I put myself in the scruffies' camp, probably, though I always had a yen for predicate logic and formal grammars. To my mind, some of the AI scruffies weren't scruffy enough: they tried to model human intelligence without any reference to psychological data. I tried to redress the balance a bit by comparing my program's output with psychological data on human inference during story comprehension. You can read all about it here.
At the time I did my Ph.D., my work was pretty unfashionable: I was researching symbolic AI approaches while everyone around me seemed to be doing neural networks. While sub-symbolic approaches might produce intelligent output, I struggled to see how they could lead to a description of the solution, or to anything that humans could build on or extend. If you're trying to program a reasoning system, for example, is it enough to train a neural network to form associations, or do you need to write something that can reflect on the process by which it reached its solutions? Neural nets are great for recognition tasks, but I was never convinced they were suitable for reflecting on how they completed the task. I'm sure there are plenty of counter-arguments to my limited opinion, so feel free to enlighten me.