A week since launch, OpenAI's ChatGPT has shown the power, and horror, of AI

An image created with OpenAI's DALL-E tool. (Image credit: Generated by DALL-E)

The concept of artificial intelligence is nothing new. In fact, there's a good chance that you've used something that relied on AI in the last 24 hours. But when OpenAI launched ChatGPT last week, it lowered the entry requirements for using AI.

ChatGPT is a chatbot that's accessible through any web browser. It's designed to be interacted with using natural language that feels like a conversation.

Microsoft's Azure AI supercomputing infrastructure is used to train the GPT-3.5 models that power ChatGPT. OpenAI and Microsoft announced a partnership back in 2019 that included a $1 billion investment from Microsoft and OpenAI exclusively using Azure as its cloud provider.

With ChatGPT in preview, we have real-world examples of how everyday folks will use AI, and that's both inspiring and horrifying. The examples below, shared by BleepingComputer this week, illustrate the range of the new AI chat tool, from best to worst.

Using AI for good

Like any tool, ChatGPT can be used for good, evil, and anything in between. OpenAI designed ChatGPT to debug code, and it appears to do so very well. In the first example below, it even suggested a fix and explained why that fix was needed. ChatGPT can also detect security vulnerabilities.
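To illustrate the kind of debugging task described above, here is a hypothetical example (not one of the screenshots shared by BleepingComputer): a Python function with a common off-by-one bug of the sort ChatGPT has been shown to spot, alongside the corrected version and an explanation of why the fix works.

```python
# Hypothetical example of the kind of bug a chatbot can be asked to debug.

def sum_list_buggy(values):
    """Intended to sum a list, but skips the last element."""
    total = 0
    for i in range(len(values) - 1):  # Bug: stops one index early
        total += values[i]
    return total

def sum_list_fixed(values):
    """Corrected version: range(len(values)) covers every index,
    because range() already excludes its stop value."""
    total = 0
    for i in range(len(values)):
        total += values[i]
    return total

print(sum_list_buggy([1, 2, 3]))  # prints 3 -- last element missed
print(sum_list_fixed([1, 2, 3]))  # prints 6 -- correct
```

The explanation a tool like ChatGPT offers for such a fix, that `range()` excludes its stop value, so subtracting one drops the final element, is exactly the kind of accompanying reasoning described in the example above.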

AI isn't just about coding and getting work done. Security expert Ken Westin played around with ChatGPT's ability to write in the style of a specific person.

The downside of AI

Of course, artificial intelligence has its downsides and dangers. Microsoft President Brad Smith has often discussed the need to regulate and legislate AI, and his argument appears to have merit after a look at just a few examples of ChatGPT's darker side.

The power of AI being so accessible opens several cans of worms. For example, scammers can use AI to create convincing phishing emails, and the same capabilities that produce beneficial software can be turned to writing malware.

AI also has a problem with bias. Sexism, racism, and other types of bigotry can be worked into AI models.

OpenAI is open about some of the issues surrounding bias. "While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior," explained OpenAI.

"We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system."

ChatGPT can also be led to create responses that are offensive, hurtful, or misleading. These can range from Ultron-esque responses about humans being inferior and wishing for the extinction of the human race to writing offensive song lyrics.

These situations require human input, of course, but the powerful tools provided by ChatGPT make certain things easier to create. Twitter user Front Runner shared an example of an essay written by ChatGPT that makes an immoral argument (sensitive content warning).

A quickly moving industry

AI moves as quickly as, if not more swiftly than, any other industry. Technology in the space improves at an astonishing rate, arguably faster than legislation or moderation can handle. The week that followed the launch of ChatGPT illustrates the diverse range of tasks AI can be used to perform.

Securing the future of AI will require those behind the technology to build tools for keeping things under control, as well as restraint from the individuals who use artificial intelligence.

Sean Endicott
News Writer and apps editor

Sean Endicott brings nearly a decade of experience covering Microsoft and Windows news to Windows Central. He joined our team in 2017 as an app reviewer and now heads up our day-to-day news coverage. If you have a news tip or an app to review, hit him up at sean.endicott@futurenet.com.