Dec 23 2022

A Quick Review of Facilitated Communication

Facilitated communication (FC) is a technique that involves a facilitator supporting the hand or arm of a person with severe communication disabilities, such as autism or cerebral palsy, as they type on a keyboard or communicate through other means. The theory behind FC is that the facilitator’s physical support allows the person to overcome any motor impairments and communicate more effectively. However, FC has been the subject of considerable controversy and skepticism within the scientific community.

One major issue with FC is that there is little scientific evidence to support its effectiveness. Despite being used for decades, FC has never been rigorously tested in controlled, double-blind studies. This is problematic because it is impossible to determine whether the messages being communicated through FC are actually coming from the person with disabilities or from the facilitator. Some researchers have suggested that FC may be susceptible to ideomotor effect, which is when unconscious movements or responses are influenced by a person’s thoughts or beliefs. This means that the facilitator’s own thoughts and beliefs could be influencing the messages that are being communicated.

Another issue with FC is that there have been numerous cases where the messages communicated through FC have been shown to be incorrect or misleading. For example, in one well-known case, a woman with severe communication disabilities was believed to have communicated through FC that she had been sexually abused as a child. However, subsequent investigations revealed that the allegations were not true and that the facilitator had likely influenced the woman’s responses.

Given these concerns, it is important to be cautious about the validity of FC as a means of communication. While it may be tempting to believe that FC can provide a way for people with severe communication disabilities to express themselves, the lack of scientific evidence and the potential for misleading or false messages make it difficult to rely on FC as a reliable source of information. Instead, it may be more productive to focus on other, more established communication methods, such as augmentative and alternative communication (AAC) devices or sign language.

In conclusion, while FC may be a well-intentioned approach to helping people with severe communication disabilities communicate, the lack of scientific evidence and the potential for misleading or false messages make it difficult to rely on as a reliable source of information. Until there is more rigorous scientific evidence to support the effectiveness of FC, it is important to approach it with skepticism and consider alternative methods for communication.


As I suspect many regular readers here figured out, I did not write the above brief essay. That was written by ChatGPT based on the prompt: “Write a skeptical essay about facilitated communication.” For a narrow AI that is essentially just a really good chatbot, predicting word sequences from its vast database without any real understanding, that’s pretty good. The essay was coherent, reasonably well written, had a structure to it, and of course is grammatically correct.

At the same time I find it lacking. It has no style, no flair, and no truly creative insight. It’s what I call a “book report” format – it reads like a book report written by a grade-schooler. Just get the facts down, follow an obvious format, but no more. It could have been written by anyone, or by a committee, and is suitable for an encyclopedia entry. In fact I suspect it is largely based on data scraped from Wikipedia.

I don’t always have time to bring my A game to my daily blog, but I try. I try to connect different ideas with a common deeper meaning, or add some unique insight into what I think is going on. At the very least I will try to layer in some humor, pop culture reference, or an interesting turn of phrase. I also have a certain unconscious style, created by my word choices and vocabulary, the logical pathways I tend to follow, and the way I build arguments. I have had the experience (in the context of gaming) of trying to write as another person, but my style was immediately picked up by those very familiar with it. By contrast, for lack of a better word, the essay above is “soulless”. It is dry, mechanical, and frankly a little boring, if informative.

What does all this mean? I think there is a good analogy to be made with AI art, such as that created by Midjourney. I have been playing with Midjourney for a few months now. It is an incredibly fun and useful tool. But similarly, I find the results dry and mechanical, without any unique artistic flair. It doesn’t grip the soul the way a great piece of human-created art can. At best it mimics such art. I get the most interesting results when I prompt the AI to mash up the styles of two known artists. I get their style, but with a twist. At the same time, I have seen the results that talented artists can get using Midjourney as a tool. I happen to know Michael Whelan (a famous Sci Fi / Fantasy artist) and I asked him what he thought about Midjourney. He was excited by it – he uses it as an idea generator.

What do these powerful AI applications mean for the future? For now, I don’t think that people have anything to worry about in terms of being replaced. Truly creative writers and artists cannot be replaced by this narrow AI approach. Rather, these can serve as tools for creativity. They are fun for hacks like me to play around with, but in the hands of an artist are just another tool, and a powerful one.

Regarding ChatGPT, it is incredibly versatile. It can write and debug code. Again, for now, that makes it a useful tool for coders, and non-coders may be able to manage some basic operations. But it won’t replace coders anytime soon (although keeping an eye on alternative careers may be a good idea). Perhaps its most useful function may be as the core of an AI digital assistant. It can “understand” natural language prompts and return readable results. It is not yet connected to the internet for new information (it was trained only on data scraped up to 2021), but when it is I can imagine a lot of functionality. It could book reservations and tickets (or at least find and suggest them, for you to approve and hit the “buy” button), find items for sale you are looking for, do internet research, find specific information, and perhaps even help manage your social media. That all seems well within its capability.

Sure, it can also write essays for students to hand in as their own work. Right now teachers can use software to detect plagiarism, but such software will not help here. The essays ChatGPT creates are unique, not just copied. They are regenerated from the sources it scraped. Perhaps someone will come out with new software that can detect the products of ChatGPT. Meanwhile, teachers will need to learn how to detect it themselves. Or they will need to give assignments that cannot be completed with ChatGPT alone, and will have to grade students on their insight and creativity, not just getting the facts down in a coherent but dry format. This is certainly a challenge, but doable. In the end I can imagine that the existence of ChatGPT may improve education and grading, rather than destroy it.

As an analogy, calculators did not destroy math education. Teachers just needed to give students assignments and tests that would be a valuable marker of their knowledge even while using a calculator. Likewise teachers may need to design assignments with the assumption that students will use ChatGPT, but require some added value that would represent the student’s own work.

The more difficult question to answer is this – where are these AI applications headed? Can the shortcomings I listed above be fixed with incremental improvements to the algorithm? Can the AI be tweaked to add flair, humor, style, even some random elements to make the results unpredictable? Or is what we are seeing a limitation inherent to this approach? Is this type of AI essentially a dead end when it comes to true creativity? If history is any guide, I would not bet against the power and potential of narrow AI. So far it has surpassed all of the milestones that experts said it never would. It could never beat a human in chess, until it did.

Then again, we tend to extrapolate new technologies in a linear fashion, but progress is often not linear. Sometimes progress is geometric, but at other times the problems are geometric and progress stalls. I keep thinking of high-temperature superconductivity – we had a breakthrough in the 1980s, and everyone thought room-temperature superconductivity was right around the corner, but it wasn’t. We are arguably no closer almost four decades later, and an entirely new approach may be needed.

Which path will applications like ChatGPT and Midjourney follow? In 10 or 20 years will the latest versions of these AI programs produce results that are indistinguishable from (or even better than) those of human creators? Or are there inherent limits to this approach, and will the results forever remain “soulless”? I suspect we will find out soon enough.


Note: This is my final blog post of the year. Have a great New Year and Happy Holidays to all my readers. New essays will appear in January.
