Disclaimer: The views expressed here are mine alone, and not necessarily those of my employer or any organization with which I am affiliated. These views are not intended to advertise or offer legal services to the reader, or to be relied on as practical advice in any respect. Apparent statements of fact may or may not have been specifically researched beforehand. Unless I expressly indicate to the contrary, the material appearing here is original work, subject to copyright protection. Any reference in the text to specific individuals or companies who are not explicitly named is unintended and purely coincidental.
Comments? You can (try to) contact me at admin (at) limitsofknowledge (dot) com. Keep in mind that I'm still learning the technical aspects of blogging, and do have a demanding job, so don't be offended if it takes me a while to respond.

Saturday, May 24, 2008 -- 4:45 pm

One of the many areas I need to become better acquainted with is, of course, the science of provability and completeness in formal systems. I'm generally familiar with Goedel's incompleteness theorems, which hold that in any sufficiently robust formal system there will be statements that are true, and expressible in the language of the system, but not derivable within it -- and that a system in which all true statements were derivable would be useless, because the system would contain a contradiction, and from a contradiction everything is derivable, whether true or not. I'm also generally aware of the halting theorem -- proved by Turing -- that no general procedure can determine, from the text of a program and its input, whether the program will eventually halt with a definitive result or run forever. I'm a bit fuzzy on the details, as I said. And I seem to recall writing earlier about treating genomes as programs for computational purposes, and whether the halting problem would stymie certain categories of advances in genetic medicine. My current question is whether the findings of Goedel, Turing, et al. should also be deemed to have implications for error correction -- and indeed, precisely how wide an application they can be deemed to have.
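For readers who, like me, want the details less fuzzy: Turing's argument is a short proof by contradiction, and it can be sketched in a few lines of code. Everything below is illustrative -- the function names are mine, and `halts` is deliberately a stub, since the whole point is that no correct, always-terminating version of it can be written.

```python
def halts(f, x):
    """Hypothetical oracle: would return True if f(x) eventually halts,
    False if it runs forever. Turing showed no such total, correct
    function can exist; this stub merely stands in for the assumption."""
    raise NotImplementedError("undecidable in general")


def paradox(f):
    """The diagonal construction: do the opposite of whatever the
    oracle predicts about f run on itself."""
    if halts(f, f):          # oracle says f(f) halts...
        while True:          # ...so loop forever
            pass
    return "halted"          # oracle says f(f) loops, so halt at once


# Feeding paradox to itself yields the contradiction:
#   if halts(paradox, paradox) is True,  paradox(paradox) loops  -> oracle wrong
#   if halts(paradox, paradox) is False, paradox(paradox) halts  -> oracle wrong
# Either way the assumed oracle is refuted, so it cannot exist.
```

The construction is the same diagonalization move that drives Goedel's proof: a statement (or program) made to refer to its own derivability (or halting behavior).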

Starting with the incompleteness theorems, suppose for the sake of argument that we could reduce the science of error-anticipation and correction to a sort of formal system, and develop a rational and systematic methodology for analyzing a given procedure, identifying all of the potential errors in the system, and correcting the system to make it fool-proof and disaster-proof. I think we all know the common-sense rebuttal to such a hypothesis; it's called Murphy's Law. In other (more) words, we almost take for granted that there's an infinite variety of possible ways that something could go wrong, and we can't possibly anticipate them all -- we can only anticipate the reasonably foreseeable and the most likely among them. But is Murphy's Law a statement of Goedel's incompleteness theorem as applied to an error-anticipation system, i.e., there will always be errors that an internally consistent protocol would be unable to identify?

And one thing I'll have to pay attention to in my research is whether the halting problem is the mirror-image of these considerations -- that it's impossible to identify all of the conditions that could cause a program to grind to a halt, or fall into some kind of futile and perpetual loop.
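A toy example may make the intuition above concrete (the checker and names here are mine, purely illustrative). A static check can catch some obvious infinite loops, but no check can catch them all: the Collatz iteration below has halted for every input ever tried, yet whether it halts for all inputs is a famous open problem.

```python
def obviously_loops(source: str) -> bool:
    """A crude static check that flags the one pattern it knows about.
    A toy heuristic, not a real analyzer -- it misses almost everything."""
    return "while True:" in source and "break" not in source


def collatz_steps(n: int) -> int:
    """Counts steps of the 3n+1 iteration. Whether this loop terminates
    for every positive n is the unresolved Collatz conjecture."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps


print(obviously_loops("while True:\n    pass"))  # True: the easy case is caught
print(collatz_steps(27))  # halts (after 111 steps), but no static check
                          # could have guaranteed that in advance
```

The gap between what `obviously_loops` can flag and what `collatz_steps` actually does is, in miniature, the gap the halting theorem formalizes.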

And at least two other areas suggest themselves for immediate interdisciplinary examination: is a political system, constitution, laws, and all, amenable to scrutiny by the twin sciences of computability and fallibility? Somehow I suspect not, because of the profound malleability of legal interpretations and political structures. While I firmly believe that even the most well-designed political system is subject to undermining and collapse (the subject of a current work of fiction), political collapses do not always arise because of internal problems in the system. Many republics fell at the end of the 18th century, for example, not because of the corrupting influence of power but, more prosaically, because they were overrun by Napoleon's army. That, in my view, proves very little about political science. Similarly, I wonder about the applicability of computability and fallibility to cognitive science and psychiatry. Some psychiatric problems have clearly identifiable physiological causes. But some, such as certain personality issues, are more strictly cognitive or behavioral. So my question is to what extent the sciences of computability and fallibility can be used to model, and potentially identify cures for, the dysfunctional thought patterns and cognitive habits at the root of so many behavioral disorders. Are there some psychiatric problems that are analogous to the computer failures modeled in the halting problem? And does that suggest that some categories of mental illness may never be entirely preventable or curable? These are questions I should look into at some point.

Friday, May 23, 2008 -- 9:23 pm

A while back I finished Reason's Human Error, which was enlightening, thought-provoking, and rich in potential for intriguing applications. It's also a field ripe for extension into a variety of disciplines, including management and communication. For example, people seldom anticipate the possibility of error in the words they choose for communication. We tend to have highly entrenched (even ritualized) views of what a word or phrase means, entrenched views about how another person would reasonably react to those words, and, more generally, an entrenched view of the truth of our assertion and the validity of our own argument. In my experience, people often seem to think that the view they're expressing is the definitive answer on a subject, and aren't willing to accept any kind of response but humble agreement. They're not really prepared for dialogue, and tend to ignore or brush past objections or interruptions without listening in their effort to express their message. Not only that, but they seem genuinely surprised when people respond, not with humility, but with righteous anger. But can there be any other reasonable response when you've suggested that someone has committed an ethical impropriety, or exhibited signs of substance abuse, or suffers from mental illness, or breached an agreement? The best communicators, on the other hand, think long and hard about how their words will be interpreted by someone listening from a different perspective, and anticipate how those other minds will react. A book I've been reading lately on jury selection suggests at least minimal efforts to systematize this knowledge, but I think there's much wider room for application than that. I'll probably reserve further expansion on this subject for a work that's been in progress for some time.

Friday, May 23, 2008 -- 9:21 pm

I've come to rely on the presence of spam in my inbox as evidence that my internet connection is working.

Thursday, May 22, 2008 -- 8:40 pm

From recent reading on personality styles and disorders, I gather that the schizotypal personality style is prone to certain kinds of magical thinking, belief in the supernatural, and the like. (I suspect that many people are inclined to such habits of thought without being schizotypal.) The literature also suggests that the personality style exhibited in adulthood is significantly influenced by childhood experiences, including being at the receiving end of different parenting styles and the various relationships one forms with parents and other family members.

In an altogether different course of reading, also nowhere near completion, there's apparently a line of scholarship suggesting that indigenous practices of witchcraft and assault sorcery are radicalized, a bit like religious fundamentalism, in response to colonial hegemony. My lay understanding is that devotion to practices like voodoo, Amazonian assault sorcery, African witchcraft practices, and the like, allows native populations to compensate for the sense of inferiority and disempowerment instilled when confronted with Western powers. (Or then again, maybe I've gotten it all wrong; my reading so far has really focused more on the beliefs themselves than on their causes, except to note that witchcraft practices seem to become more pronounced after contact with the West.)

This confluence of scholarship suggests a hypothesis. I wonder to what extent the radicalization of sorcery and witchcraft in colonized societies can be attributed to the disruption of family life that results from intrusion by an imperial power. The African slave trade, for example, surely left many broken homes, and I see reason to at least suspect that exposure to Western diseases, poachers, botanical and mineral prospectors, the diamond trade, and so on would disrupt the economy, the lines of authority in the household, traditional checks on civil strife, etc. Put another way: does the first generation born after colonization show a higher percentage of, say, schizotypal personality types than comparable but uncolonized societies? Do succeeding generations show fewer of this type, as society finds a new equilibrium? Are there counterexamples where a colony's prevailing level of superstitiousness does not vary with the political climate? Was there a change in American superstitions during the Revolution? What about Australia, Canada, and Ireland? And for that matter, within a given society, are certain constituencies more or less disposed to magical thinking because of distinctive cultural norms about parenting and family life? This would be a fun project to tackle in my free time.

Tuesday, May 20, 2008 -- 9:11 pm

I suspect that an essential component of gaining perspective and maturity as you grow older is getting to know your own strengths and weaknesses, capitalizing on the former while overcoming (or, in the alternative, compensating for) the latter; knowing the workings of your own mind and understanding how your habits of thought may be similar to or different from those of others; and having the courage to solve problems the way you know best. Peer pressure is an interesting and pervasive phenomenon, as everyone knows from the fads and fashions of junior high, but I think its effects probably extend well beyond lifestyle choices and consumer options. We try to seek social validation not only by speaking, acting, and looking like our peers, but by trying to reason and solve problems like them, too. This can have the effect of shortchanging our own unique talents, leaving us to look (and feel) dumber than we actually are because we've deliberately handicapped ourselves in choosing against the course that seems easiest and most natural.

An example I'm familiar with goes something like this. A has an intuitive feel for business, investing, and economic argument, but little interest or understanding of (say) history. B struggles to understand economic reasoning but is quite adept at extracting significant lessons from precedent and applying them successfully to new situations. A problem presents itself. A reasons through the problem according to his/her habitual means, and comes to a provisional solution. B could either (1) reason through it according to B's preferred methods, reaching a superior, inferior, or equally viable solution; or (2) struggle through it according to the methodology (A's) that carries social validation, arriving at either a "correct" or "incorrect" solution in A's view, or else giving up altogether and ceding leadership to A. If B takes route (1), there's a fairly high likelihood that A will dismiss B's arguments as misguided or irrelevant based on nothing more than B's choice of methodology, perhaps without even understanding, or trying to understand, the soundness of B's perspective. If B takes route (2), there's a fair likelihood that the resulting situation will reinforce for both participants that both A and A's methodology are superior to B and B's methodology, without any basis whatsoever in the power or validity of the method.

My general premise can be taken a few steps further, too. It certainly behooves B not only to cultivate his/her own native methodology, but, over time, to try to master A's as well; you never know when it would be beneficial to bring another perspective to bear on a situation, and different methods are better at addressing different kinds of problems. By doing so, B also gains an advantage over A (whether or not A recognizes it) by having a more versatile problem-solving repertoire, and by being able to relate to A's frame of reference as well as to those who share B's conceptual landscape. But now think of A and B's common manager, C. It is clearly important for C to understand both A's and B's thought processes and problem-solving techniques and the types of situations for which each is best suited; this not only helps C make optimal work assignments, but facilitates clearer communication in the workplace and enables C to anticipate potential sources of error.

Tuesday, May 20, 2008 -- 9:07 pm

I'm finally resurfacing from a pretty demanding work/life schedule, and hopefully will be posting more regularly again. It's not that the work/life demands have eased up any, so much as that I've been working on the organizational skills to manage them. Lately, for example, I've actually been doing the jobs of two people in two different professions. That has called for a few adjustments.