Hollywood actors are currently on strike over the use of AI in films. They worry that studios could take control of their likenesses and feature them in movies without their consent, potentially casting them in roles they would refuse or in scenes they find distasteful. There are also concerns that actors would not be paid for performances generated from their likenesses. SAG-AFTRA (the Screen Actors Guild – American Federation of Television and Radio Artists, a single merged union) is striking until it can negotiate AI rights with the studios.
In a separate development, researchers have found that training AI image generators on AI-generated images degrades output quality: as the share of synthetic images in the training data grows, the resulting outputs become increasingly distorted. Researchers at Rice University found that keeping the proportion of synthetic images below a certain threshold can prevent this degradation.
There are also reports of ChatGPT, an AI language model, performing poorly on math problems. One study found that a newer version of the model was significantly less accurate at identifying prime numbers than an earlier version. The drop in performance may be an unintended side effect of fine-tuning the model for other purposes.
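For context, primality checking is a fully deterministic task, which is what makes it a clean benchmark for this kind of study. A minimal trial-division check in Python (an illustrative sketch, not code from the study itself) looks like this:

```python
import math

def is_prime(n: int) -> bool:
    """Deterministic trial-division primality check."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Test odd divisors up to floor(sqrt(n)); any composite n
    # must have a factor in this range.
    for i in range(3, math.isqrt(n) + 1, 2):
        if n % i == 0:
            return False
    return True

print(is_prime(17077))  # → True (17077 is prime)
```

Because the ground truth is trivially computable, any change in a model's accuracy on such questions can be measured exactly between versions.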
While larger-scale AI training data sets have led to advancements in AI capabilities, there are potential downsides. Researchers found that models trained on larger data sets were more likely to associate Black female and male faces with criminality, indicating racial biases in AI models.
Another claim holds that AI models can identify targets better than humans can, though the details behind this claim remain undisclosed.
With Hollywood on strike and AI technology facing these challenges, the reliability and performance of AI models like ChatGPT are open to question. Addressing these issues will require continued research and refinement to mitigate bias and maintain reliability.