
While researching the capabilities of OpenAI's AI-powered text generators, a professor at the University of Pennsylvania's Wharton School found that the company's GPT-3 model was able to pass a final exam for the school's Master of Business Administration program.


Professor Christian Terwiesch said that the chatbot passed the exam with a score between a B- and a B. He said the score is proof of the bot's "ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates including analysts, managers, and consultants."

Terwiesch noted that GPT-3 did an "amazing job at basic operations management and process analysis questions including those that are based on case studies." It was also "remarkably good at modifying its answers in response to human hints," he concluded.


The experiment was conducted with the GPT-3 model, a predecessor of OpenAI's viral ChatGPT bot. The advanced capabilities of the newer, viral model have sparked debates about whether generative AI signals the end for human employees. Educators have also expressed concern that the program could inspire widespread and virtually undetectable cheating.


In November 2022, Kevin Bryan, an associate professor at the University of Toronto, tested ChatGPT's ability to write graduate-level responses and concluded that "the OpenAI chat is frankly better than the average MBA at this point." OpenAI, the creator of ChatGPT, is reportedly working on a watermarking system to address these concerns.

Patterns purposely buried in AI-generated texts could help identify them as such, allowing us to tell whether the words are written by an AI or not. These “watermarks” are invisible to the human eye but let computers detect that the text probably comes from an AI system. If embedded in large language models, they could help prevent some of the problems that these models have already caused. For example, since OpenAI’s chatbot ChatGPT was launched in November, students have already started cheating by using it to write essays for them.


Building the watermarking approach into systems before they are released could help address such problems. AI language models work by predicting and generating one word at a time. After each word, the watermarking algorithm randomly divides the language model’s vocabulary into words on a “greenlist” and a “redlist” and then prompts the model to choose words on the greenlist.

The more greenlisted words a passage contains, the more likely it is that the text was generated by a machine; text written by a person tends to contain a more random mix of green and red words. For example, after the word “beautiful,” the watermarking algorithm could classify the word “flower” as green and “orchid” as red. An AI model running the watermarking algorithm would then be more likely to use the word “flower” than “orchid.”
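The scheme described above can be sketched in a few lines of Python. This is a hypothetical toy illustration, not OpenAI's actual system: the ten-word vocabulary, the function names, and the use of a hash of the previous word as the random seed are all assumptions made for clarity; real watermarks operate on a language model's full token set and bias the model's sampling probabilities rather than forcing a choice.

```python
import hashlib
import random

# Toy vocabulary standing in for a language model's token set (assumption).
VOCAB = ["flower", "orchid", "garden", "sunset", "river",
         "mountain", "breeze", "meadow", "forest", "petal"]

def green_list(prev_word, vocab=VOCAB, fraction=0.5):
    """Deterministically split the vocabulary into a greenlist,
    seeded by the previous word, so a detector can recompute it."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * fraction)
    return set(shuffled[:cutoff])       # the rest is the implicit redlist

def green_fraction(text):
    """Detection side: what fraction of words fall in the greenlist
    derived from their predecessor? Watermarked text scores high."""
    words = text.lower().split()
    hits = sum(1 for prev, word in zip(words, words[1:])
               if word in green_list(prev))
    return hits / max(len(words) - 1, 1)
```

Because the greenlist is recomputed from the preceding word alone, a detector needs no access to the model itself, only to the seeding scheme; human-written text, which ignores the lists, lands near the 50% baseline on average.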


The breathtaking speed of AI development means that new, more powerful models quickly make our existing tool kit for detecting synthetic text less effective. It is a constant race for AI developers to build safety tools that can keep pace with the latest generation of models.

Getty Images has filed a landmark lawsuit against Stability AI, creators of open-source AI art generator Stable Diffusion, escalating its legal battle against the firm.


Getty accuses Stability AI of a “brazen infringement of Getty Images’ intellectual property on a staggering scale.” It claims that Stability AI copied more than 12 million images from its database without permission or compensation as part of its efforts to build a competing business and that the start-up has infringed on both the company’s copyright and trademark protections.


The lawsuit is the latest volley in the ongoing legal struggle between the creators of AI art generators and rights-holders. AI art tools require illustrations, artwork, and photographs to use as training data, and these are often scraped from the web without the creators' consent. Once the AI has been trained, it can generate new images in response to text prompts.

Getty announced in December 2022 that it had commenced legal proceedings in the High Court of Justice in London against Stability AI. However, that claim has not yet been served, and the company did not say at the time whether it also intended to pursue legal action in the US. Stability AI is also being sued in the US, along with another AI art start-up, Midjourney, by a trio of artists who are seeking class action status. Legal experts say Getty Images' case is on stronger footing than the artist-led lawsuit, but caution that in such uncharted legal territory it is impossible to predict any outcome. Getty alleges that Stability AI wilfully scraped its images without permission.
