
How to Solve a Problem Like Sophia: The Hardware Needed to Create an Artificial Intelligence

The late, great Stephen Hawking said it “could spell the end of the human race”, whereas Facebook founder Mark Zuckerberg insisted it would make driving safer and help to better diagnose diseases. Whichever end of the spectrum you lean towards, it’s unlikely you have no opinion on the controversial topic of artificial intelligence (AI). It has the potential to revolutionize the way we learn and has, in some form, already cropped up in our phones, our transport and our video games. However, even at this unbelievably early stage in the development of true artificial intelligence, it’s already having some surprising and often eerie consequences. Researchers at Facebook, for example, shut down two chatbots after they started successfully communicating with each other in their own bizarre and incomprehensible language.


A-List AI

Things have progressed to the stage where AIs are becoming household names, as is the case with Sophia the Robot, a humanoid robot that runs on AI and has spawned an endless series of Twitter memes about her assumed inevitable destruction of the human race. Sophia, who was designed by the Hong Kong-based company Hanson Robotics and modelled after actress Audrey Hepburn, even became a citizen of Saudi Arabia in 2017.

AIs have also reached the point where they are beginning to overtake us in areas of life we might consider uniquely human, such as play. Numerous AIs have recently been tasked with “solving” popular games: given only the rules, they play hundreds of thousands of rounds in order to learn the best possible strategy in every situation. One example is Google DeepMind’s AlphaGo, which beat the world’s top Go player in 2017. As the British Go Association explains, Go is played by strategically placing black and white stones on an empty board in order to surround territory and capture the opponent’s stones. Its successor, AlphaZero, then went on to teach itself chess in just four hours before beating the world’s best chess software at its own game.

These are both games of pure strategy, but even when an element of luck is added, AIs have started to beat humans. A case in point is blackjack. Even for humans, blackjack essentially relies on an understanding of probability, as Betway’s breakdown of blackjack strategy explains. An AI can’t guarantee itself a win, but it can make the most mathematically sound choice on every hand when deciding whether to hit or stand. Perhaps even more impressive is the AI at the University of Copenhagen in Denmark that has begun to design its own card games, a display of the kind of creativity we once believed we monopolized. Beyond making some people feel a bit obsolete, this could have some intriguing real-world applications, since it could democratize gaming by generating different rules for players of different abilities.
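To give a flavour of what “picking the most mathematically sound option” looks like in practice, here is a minimal Monte Carlo sketch in Python. It assumes an infinite deck and a dealer who stands on all 17s, which are simplifications of real casino rules, and all the function names are illustrative rather than from any real blackjack engine:

```python
import random

# Infinite-deck simplification: every rank is always equally likely.
CARDS = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]  # 11 = ace

def best_total(cards):
    """Best hand value, downgrading aces from 11 to 1 as needed."""
    total, aces = sum(cards), cards.count(11)
    while total > 21 and aces:
        total, aces = total - 10, aces - 1
    return total

def dealer_total(upcard):
    """Dealer draws until reaching 17 or more (stands on all 17s)."""
    hand = [upcard, random.choice(CARDS)]
    while best_total(hand) < 17:
        hand.append(random.choice(CARDS))
    return best_total(hand)

def expected_value(player, upcard, action, trials=20_000):
    """Average outcome of one action: win +1, push 0, loss -1."""
    score = 0
    for _ in range(trials):
        if action == "hit":
            total = best_total([player, random.choice(CARDS)])
        else:
            total = player
        if total > 21:
            score -= 1  # player busts before the dealer acts
            continue
        dealer = dealer_total(upcard)
        if dealer > 21 or total > dealer:
            score += 1
        elif total < dealer:
            score -= 1
    return score / trials

# Compare the two options for a hard 16 against a dealer 10:
print("hit:", expected_value(16, 10, "hit"))
print("stand:", expected_value(16, 10, "stand"))
```

Running the comparison for every combination of player total and dealer upcard is essentially how a basic-strategy chart is derived; the AI simply always takes the higher-value option.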

 

Tools of the Trade

One reason AI is suddenly on everyone’s lips is that the technological limitations that used to hold us back are beginning to slip away. As hardware becomes more and more advanced, the cost of developing artificial intelligence, and the space needed to house it, drops dramatically. For example, back in 2012, an NVIDIA Research team working in conjunction with Stanford demonstrated that 12 NVIDIA GPUs (graphics processing units) could deliver the deep-learning performance of 2,000 CPUs, or central processing units. This matters because of the sheer scale of the problem: when researchers at the Okinawa Institute of Science and Technology Graduate University in Japan and Forschungszentrum Jülich in Germany simulated a single second of activity in around one per cent of the human brain’s neuronal network, it took a mind-boggling 82,944 processors some 40 minutes to complete.

Back in 2002, the fastest supercomputer in the world, the NEC Earth Simulator, was capable of 35.86 teraflops. It cost around 60 billion yen to build and filled an entire warehouse with processors. Nowadays, you can build a similarly powerful computer for just over $3K, not to mention fit it under your desk. By far the bulk of that expense goes on the GPUs, at roughly a grand apiece. This price point is largely down to NVIDIA’s unofficial monopoly on the market, but a number of startups have now arrived on the scene to challenge it, which could spell dramatically reduced costs in future. As with every other aspect of technology under the sun, Google has also tried its hand, producing the Tensor Processing Unit (TPU), designed specifically for its own open-source software library, TensorFlow.

Deep learning doesn’t require high precision so much as a massive amount of computation, which is one important factor to consider when selecting hardware. Even more crucial, however, is the number of PCIe lanes the CPU can handle, since the attached GPUs need to be fed with data at a very high rate while the machine is training. While an AMD Ryzen CPU can currently support 24 lanes, compared with an Intel desktop CPU’s 16, by far the most impressive is AMD’s Threadripper, which came out only last year and can support an immense 64 lanes. In addition to processors, the other components required include memory, an SSD, a power supply, a motherboard, a hard drive and a tower case. Once acquired, a hobbyist can assemble these with just a screwdriver and screws to create a machine capable of performing 50 trillion operations per second.
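To see why lane count matters, here is a back-of-the-envelope sketch. It assumes PCIe 3.0 at roughly 0.985 GB/s of usable bandwidth per lane and a hypothetical four-GPU build with lanes split evenly; both figures are illustrative assumptions rather than numbers from any particular motherboard:

```python
# Usable bandwidth per PCIe 3.0 lane in GB/s (8 GT/s with 128b/130b encoding).
PCIE3_GBPS_PER_LANE = 0.985

def lanes_per_gpu(cpu_lanes, num_gpus):
    """Lanes each GPU gets if the CPU's lanes are split evenly."""
    return cpu_lanes // num_gpus

def total_feed_rate(cpu_lanes, num_gpus):
    """Aggregate GB/s the CPU can push to all GPUs simultaneously."""
    return num_gpus * lanes_per_gpu(cpu_lanes, num_gpus) * PCIE3_GBPS_PER_LANE

for cpu, lanes in [("Intel desktop", 16), ("AMD Ryzen", 24), ("Threadripper", 64)]:
    print(f"{cpu:14s} {lanes_per_gpu(lanes, 4):2d} lanes/GPU, "
          f"{total_feed_rate(lanes, 4):5.1f} GB/s for 4 GPUs")
```

With only 16 lanes, four GPUs get a mere four lanes each, while Threadripper’s 64 lanes keep every card on a full x16 link, which is exactly the bottleneck the paragraph above describes.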

 

Advice for Beginners

All in all, the sophistication of the hardware required depends entirely on how sophisticated the AI needs to be. Simple statistical techniques demand fairly minimal hardware, but if the machine is required to find patterns in a large amount of data on its own, the requirements climb much higher. Thus a trained model for speech recognition and/or language processing can fit into around 400MB and run on a modern smartphone, whereas developing that model in the first place requires a facility that consumes as much electricity as a small town and costs hundreds of millions of dollars.

This is because correlations must be computed between data points, requiring immense chunks of data to be held in memory, with RAM being the most common limiting factor on the complexity of machine learning. Throwing ten times the data at a problem will increase hardware requirements by a factor of ten, which is why bloggers like Jason Brownlee recommend learners start with something like an iMac i7, so that the constraints force them to learn good experimental design.
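To make that linear scaling concrete, here is a rough sketch estimating the RAM needed just to hold a dense float32 feature matrix in memory; the dataset sizes are hypothetical examples, not figures from the article:

```python
def ram_gb(rows, cols, bytes_per_value=4):
    """RAM in GB to hold a dense matrix of float32 values (4 bytes each)."""
    return rows * cols * bytes_per_value / 1024**3

small = ram_gb(1_000_000, 100)    # 1M samples x 100 features
large = ram_gb(10_000_000, 100)   # ten times the samples
print(f"{small:.2f} GB -> {large:.2f} GB")  # memory grows tenfold too
```

The estimate ignores copies made during training (optimizers and intermediate buffers typically multiply it further), but the tenfold jump is exactly the scaling the paragraph above warns about.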

Creating what we would consider true intelligence, on a par with a human being, however, remains out of reach and will be for some time.

 
