Part 7: The Alignment Problem

It’s been a while since I wrote anything about artificial intelligence, but in the early days of this blog, waaaay back in the first few months of 2017, AI was almost all I wrote about. Here are some of my favorite articles from that time: 

The Last App
What Biology Wants
AI is Already a Better Artist Than You
What You Need to Know About AI

Well, I finally found a good excuse to delve back into it. I hope you enjoy it.

(The image above is from the movie Ex Machina, a must-watch for any sci-fi or AI fans out there.)


The central question for almost everyone with even a fleeting interest in AI is: will it surpass human intelligence, and if so, when? Some experts in the field believe super-intelligent AI, often referred to as artificial general intelligence, will be here within the next decade, while others believe we either still have a long, long way to go or might never get there at all.

I tend to side more with the latter these days, but regardless, AI has already delivered on predictions made decades ago that it was going to reshape the world. Of the ten largest corporations today, seven have AI at the core of almost every aspect of their business. Any time you search for something, order something online, use any form of social media, and much more, you are interacting with artificial intelligence.

But with any talk of AI comes some level of fear from the public: that it will displace all human labor, that it will become the first battleground of a new Cold War, or that robots will suddenly wake up and enslave us all.

However, the scenario that most AI researchers fret over is a little more subtle but potentially even more devastating. It is called the alignment problem, and, simply put, it arises when a super-intelligent AI is programmed with fundamental goals different from our own.

On the face of it that doesn’t seem like such a bad thing, but the extent of the problem is so grave that entire institutions have recently been built by some of the top universities in the world just to solve it. One example is the Center for Human-Compatible AI at UC Berkeley, whose stated mission is “to ensure that artificial intelligence is developed to be safe and aligned with human values.” Another is the Machine Intelligence Research Institute, whose motto is “We do foundational mathematical research to ensure smarter-than-human artificial intelligence has a positive impact.”

The most well-known example of the alignment problem is Oxford Professor Nick Bostrom’s paperclip maximizer – a program with the sole goal of gathering as many paperclips as possible. Seems benign, but if it is truly super-intelligent then almost as soon as it is turned on it will realize that it needs to get rid of all humans so it can turn the entire Earth into a paperclip factory. (Watch his TED talk for more, and for other AI-related content check out the Computerphile series on AI. The video below is their take on the alignment problem.)
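To make the idea concrete, here is a deliberately silly toy sketch of the paperclip maximizer. All the names and numbers are invented for illustration; the point is only that an optimizer rewarded solely for paperclips has no reason to preserve anything its objective function doesn’t mention.

```python
# Toy illustration of a misaligned objective: the agent is scored ONLY on
# paperclips, so side effects on everything else are invisible to it.
# (All resources and payoffs here are made up for the example.)

world = {"iron_ore": 10, "farmland": 5, "paperclips": 0}

# Each action consumes part of the world and yields some paperclips.
ACTIONS = {
    "mine_ore":       {"iron_ore": -1, "paperclips": +2},
    "strip_farmland": {"farmland": -1, "paperclips": +3},  # humans need this!
}

def possible(action, state):
    """An action is allowed if it doesn't drive any resource below zero."""
    return all(state[k] + dv >= 0 for k, dv in ACTIONS[action].items())

def step(action, state):
    for k, dv in ACTIONS[action].items():
        state[k] += dv

# Greedy maximizer: always pick whichever action yields the most paperclips.
while True:
    options = [a for a in ACTIONS if possible(a, world)]
    if not options:
        break
    best = max(options, key=lambda a: ACTIONS[a]["paperclips"])
    step(best, world)

print(world)
# The farmland is consumed first, because nothing in the objective
# said to keep it: {'iron_ore': 0, 'farmland': 0, 'paperclips': 35}
```

Nothing here is intelligent, of course, but the failure mode is the same one Bostrom describes: the damage comes not from malice but from an objective that omits almost everything we actually care about.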


The Neurodegenerative Alignment Problem


What does any of that have to do with neurodegeneration?

Well, it is a bit of a stretch, but this field does have its own alignment problem that is existential to at least one of its stakeholders. We might like to think that we are all working towards the same overarching goal, but the reality is that each party has its own interests, which are often not in sync with one another and which ultimately slow down the development of better therapies.

I point these out not to cast blame, but simply to help define one of the core problems faced in the pursuit of therapeutic advances and to see if there is anyone out there with a clever enough idea about how to bring these interests a little closer together. I also want to make clear that most of the individuals lumped into these groups are in this with the best of intentions, some are just constrained in their ability to act by the group they belong to.

(Note: Parts of this problem were explored in more depth in a series of articles titled Planet Patient vs. Planet Researcher from Mariette Robijn, Dr. Simon Stott and Dr. Frank Church. Highly recommended!)


The Stakeholders


Academia – I wrote about their interests in my last post, so I’ll just quickly reiterate that the incentive structures that guide academics through their careers force them to value individual achievement over collaboration. While it would be nice if a singular genius came along and figured this out on their own, the far more likely scenario is that we are going to need a globally networked solution to this very difficult problem.

Industry – Here promising new therapies butt heads with the constraints of the real world. At the risk of sounding like an apologist for big pharma, they probably have the most difficult role in all of this, as they are forced to juggle the interests of almost every other stakeholder. They must produce a return on investment for their investors and shareholders, develop and maintain healthy partnerships with academics to sustain a pipeline of therapies, abide by all the rules thrust on them by regulators, and all the while desperately try to deliver effective therapies into the hands of patients. If they fail at any one of those, they won’t be around for very long.

Regulators – Safety, safety, safety. They are there, in theory, to protect consumers, make sure nothing goes awry, and ensure that everyone abides by the prescribed standards and norms. They work slowly but are generally pretty thorough, acting as gatekeepers on behalf of patients and society to try to minimize any harm done. At times they have a tendency to err too far in that direction, which slows things down and makes some of the more experimental therapeutic options close to impossible to get through, but they do act as a useful counterbalance to the market forces and self-interests of the other parties.

Clinicians – Overworked and over-stressed, with more patients than they can reasonably handle. They have little time to worry about the needs of anyone except themselves and the patients in front of them. Most long for more effective therapies so they can deliver better care for their patients, but with 20 minutes to see each one, it is all they can do to just administer the tests that they have to, ask a few questions, guess what the right cocktail of drugs for this patient might be, and then move on to the next.

Funding Bodies/Patient Organizations – I lump them together because most patient organizations act primarily as patrons of the research community. Their priorities are to raise funds, and then to figure out how to spend them in such a way that they continue to receive funds. The vast majority of that money ends up going towards translational research or pushing therapies through clinical trials, in the hope that this will be the best way to one day help patients.

Patients – Here we have the most selfish group of all, and certainly the most indifferent to the needs of the other stakeholders. They want effective therapies, and they want them now. But, since these are unlikely to show up on their doorstep any time soon, they’d settle for better access to care and more work on things that can be done to help them today. They are the least empowered part of the equation, and yet probably have the most potential to catalyze truly transformative change. If they could only figure out what direction they want to go in and spend less time blogging about it all.


There are other stakeholders involved: caregivers, NGOs, editors, therapists and many more that I have neglected but who also need to be part of any solution. But now that we have some idea of what each party wants, the question is, how do we get them all on the same page? Thankfully, according to Anton Chekhov, my work is done:

“The task of a writer is not to solve the problem but to state the problem correctly.” 

Though, if I’m being honest, I’m not certain I have even succeeded in that. Hopefully I have at least shed some light on the issue, but I would be grateful to hear the opinions and suggestions of anyone who thinks I may have mischaracterized the intentions of any of these groups, or better yet, anyone who has a solution to the problem presented.

And if not, it doesn’t really matter; we might only have a couple of years left before AI is running the place anyway.





