OpenAI's GPT-4: Improvements and Limitations


OpenAI has released GPT-4, a text-generating AI model that improves on its predecessor, GPT-3, by producing more factually accurate statements and by understanding images. Even so, GPT-4 reportedly still makes basic reasoning errors and hallucinates facts; in one example, it described Elvis Presley as the son of an actor. OpenAI president Greg Brockman said the model differs from its predecessor, showing markedly better performance in areas such as calculus and law, where earlier models performed poorly.

Unlike its predecessors, GPT-4 was trained on both image and text data, so it can act on prompts that combine the two. Its image understanding is impressive: it can caption a photo and even explain its contents in detail. For now, however, only one launch partner has access to GPT-4's image analysis capabilities: Be My Eyes, an assistive app for the visually impaired. OpenAI is weighing the risks and benefits before deciding on a wider rollout.

The launch of GPT-4 raises ethical questions, chief among them how to prevent the model from being used in unintended ways that inflict harm, whether psychological, monetary, or otherwise. OpenAI says the model underwent six months of safety training and that, in internal tests, it was 82% less likely to respond to requests for content disallowed by OpenAI's usage policy and 40% more likely to produce "factual" responses than GPT-3.5.

OpenAI navigated similar dilemmas with DALL-E 2, its text-to-image system, initially disabling the relevant capability before later allowing customers to upload people's faces for AI-powered editing. The company says it is figuring out where the danger zones are and will clarify them over time.
