AI has long been the muse for dystopian science fiction, where super-intelligent robots overrun humanity. But the reality is much less sinister – at least, not yet.
One of the most visible examples of real-world AI is voice-to-text on smartphones, where you speak into a microphone and your phone converts your speech to text. Others are the navigation systems from Google and Apple, which use AI to analyze traffic patterns and suggest routes.
There are also medical AI systems that analyze images to identify small lesions or growths on a patient’s body. These systems cost a fraction of the $100 per hour that radiologists charge and can scan thousands of images in minutes.
And then there’s the stock market, where algorithms draw on huge pools of data to predict how a company’s shares will perform and help traders make informed decisions. AI is increasingly used in legal services, too, to streamline research and facilitate dispute resolution.
But there are some big questions about how to make sure that companies deploying these technologies do so with the public’s best interests at heart. The task force that New York City recently established to study these issues was originally supposed to require that companies publish their AI’s source code and run simulations of its decision-making processes. But these requirements were dropped after they drew criticism from academics and civil liberties groups.
To protect the public, Russell says that AI should be designed with a range of human values in mind rather than relying on any one methodology. He advocates creating a code of conduct for researchers and developing laws and treaties to ensure that superintelligent machines don’t become the horrors depicted in Ex Machina or the Matrix series.