“I’m actually fairly optimistic that this problem can be solved. We wouldn’t have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python, that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and its motivation system is constructed in such a way that it is motivated to pursue our values or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.” – Nick Bostrom
Being optimistic about the future – especially one that involves a superintelligent A.I. – doesn’t mean we should ignore the risks posed by whatever we create. There will always be risk in what we do, and Nick Bostrom lays out in detail the many dangers that could arise from an A.I. that doesn’t share our values.
That shouldn’t deter us, however, from pushing forward and achieving these amazing technological feats. Instead we should adhere to the Proactionary Principle and begin discussing these risks now, so that we can develop a strategy that effectively mitigates them and ensures the greatest possible outcome. Bostrom envisions a superintelligent A.I. that shares our values and fights for us when we need its help the most. We’re not talking about a doomsday scenario, like in the Terminator film series; we’re talking about the conjoined existence of both humanity and A.I. Are you ready for the future?
Photo Credit: TED Talk