Wednesday, April 26, 2023

The Rise of Artificial Intelligence: An Apocalypse or Just Another Job Killer?

 


There is a growing fear that artificial intelligence could threaten not just jobs, but also factual accuracy, reputations, and even the very existence of humanity. That fear calls for regulation that strikes a balance between safety and innovation.

 

Because artificial intelligence (AI) is improving so quickly, worries about its possible risks have spread like wildfire. The thought of AI becoming the Frankenstein's monster of the 21st century gives many people the chills: it feels as if we are walking a tightrope, where one misstep could set off consequences nobody anticipated. Simply put, the rise of AI has stirred widespread anxiety and raised a red flag about the dangers that may lie ahead.

Should we automate away every job, even the ones that make us happy? Should we create non-human minds that might eventually outnumber, outsmart, and replace us? Should we risk losing control of our society? These questions were posed in an open letter published last month by the Future of Life Institute, an NGO. It called for a six-month pause in the development of the most advanced forms of artificial intelligence (AI). Elon Musk and other big names in tech signed the letter, and it is the most prominent example yet of how rapid progress in AI has stoked worry about how dangerous it could be.

In particular, new large language models (LLMs), like the one behind ChatGPT, a chatbot made by the startup OpenAI, have surprised even their creators with skills they did not expect. These emergent abilities range from solving logic puzzles and writing computer code to identifying a film from a plot summary written in emoji.
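To give a concrete sense of how such a skill is probed, the sketch below uses OpenAI's Python SDK as it existed in early 2023 to ask a chat model to identify a film from an emoji summary. The model name, the prompt, and the environment-variable handling of the API key are illustrative assumptions, not details drawn from this post.

```python
# A minimal sketch (not from the article) of probing an "emergent" skill:
# asking a chat model to identify a film from an emoji plot summary.
# Assumes the pre-1.0 OpenAI Python SDK (early 2023) and an API key stored
# in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; any chat-capable model would do
    messages=[
        {
            "role": "user",
            "content": "Which film does this emoji summary describe? 🚢🧊💔🎻🌊",
        }
    ],
)

# The reply is plain text, e.g. something like "Titanic".
print(response["choices"][0]["message"]["content"])
```

Nothing in the training recipe mentions emoji puzzles; abilities like this simply emerge from next-word prediction at scale, which is what surprised the models' creators.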

These models could change how people interact with computers, with information, and even with themselves. AI supporters say the technology could help solve big problems: discovering new drugs, designing new materials to help fight climate change, or untangling the complexities of fusion power. Others point out that AIs can already do things their makers do not fully understand, and worry this risks bringing to life the science-fiction disaster scenario of a machine that outsmarts its creator, often with fatal consequences.

This seething mix of excitement and anxiety makes it hard to weigh the opportunities against the hazards. But lessons can be drawn from other industries and from earlier shifts in technological paradigms. So what exactly has changed to make AI so much more capable? How frightened should you really be? And what should governments do?

Much has now been published on how LLMs work and where they might go next. The first wave of sophisticated AI systems, which arrived a decade ago, depended on training data that had been meticulously labeled. Shown enough labeled examples, they could learn to do things like recognize photos or transcribe speech. Modern systems do not need pre-labeling, so they can be trained on far larger data sets scraped from the internet. In practice, LLMs can be trained on the entirety of the internet, which explains their powers, both positive and negative.
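To make the contrast concrete, here is a toy sketch of my own, not anything described above: supervised systems need hand-labeled (input, label) pairs, whereas a language model's training signal comes for free from raw text, because the target is simply the next word.

```python
# Toy illustration (an assumption for exposition, not how real LLMs are built):
# supervised learning needs human-provided labels, while "self-supervised"
# next-word prediction manufactures its own targets from raw text.
from collections import defaultdict

# Supervised: every example must be paired with a label someone wrote down.
labeled_examples = [("photo_001.jpg", "cat"), ("photo_002.jpg", "dog")]

# Self-supervised: any raw text yields (context -> next word) training pairs
# with no labeling effort at all.
corpus = "the cat sat on the mat the dog sat on the rug".split()

next_word_counts = defaultdict(lambda: defaultdict(int))
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1  # count which word follows which

def predict_next(word):
    """Return the most frequent follower of `word` seen in the corpus."""
    followers = next_word_counts.get(word)
    return max(followers, key=followers.get) if followers else None

print(predict_next("sat"))  # -> 'on'
print(predict_next("the"))  # -> 'cat' (ties broken by first occurrence)
```

Real LLMs replace the frequency table with a neural network holding billions of parameters, but the shape of the training signal is the same: predict the next token in text scraped from the internet, with no labeling bottleneck limiting how much data can be used.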

The release of ChatGPT in November brought these previously hidden possibilities to the attention of a much wider audience. Within a week, a million people had used it; within two months, 100 million had. It was soon being put to use writing school essays and wedding speeches. ChatGPT's success, and Microsoft's decision to integrate it into Bing, the company's search engine, prompted other firms to build and release chatbots of their own.

Some of the results were distinctly peculiar. Bing Chat, for instance, suggested to a journalist that he should leave his wife and start a new life elsewhere. A law professor has accused ChatGPT of defaming him. LLMs generate answers that have the ring of fact but frequently contain inaccuracies or outright fabrications. Despite this, technology companies including Microsoft and Google have begun incorporating LLMs into their products to help customers create documents and carry out other tasks.

The recent surge in the power and visibility of AI systems, together with growing awareness of their capabilities and flaws, has stoked fears that the technology is now progressing so rapidly that it cannot be governed safely. Hence the call for a pause, and the growing fear that AI could pose a threat not just to jobs, but also to factual accuracy and reputations, and even to the very existence of humanity.

Regulating AI: Striking a Balance Between Safety and Innovation

 

The worry that machines will take over jobs is centuries old. But so far, new technologies have created more jobs than they have destroyed. Machines can do some tasks but not others, which increases demand for people who can do the work machines cannot. Could it be different this time? A sudden upheaval in the job market has not materialized so far, but it cannot be ruled out. In the past, technology tended to displace low-skill work, whereas LLMs can handle some white-collar tasks, such as summarizing documents and writing code.

There has been much debate about whether AI poses a grave risk, and experts do not agree. In a 2022 survey of AI researchers, 48% said there was at least a 10% chance that AI's effects would be extremely bad (for instance, the end of humanity). But 25% said there was no such risk, and the median researcher put it at 5%. The nightmare scenario is a highly capable AI that causes harm at scale, by making poisons or pathogens, or by persuading people to commit terrorist acts. It need not have evil intent: researchers worry that future AIs may simply have goals that differ from those of their creators.

Such possibilities should not be ruled out. But they all rest on a great deal of guesswork and a big leap from today's technology. Many such scenarios imagine future AIs with unfettered access to energy, money, and computing power, which are real constraints today and could be denied to a rogue AI in the future. Moreover, experts tend to overstate the risks in their own field compared with other analysts. (And Mr. Musk, who is launching his own AI venture, has an interest in his rivals hitting pause.) Heavy regulation, or indeed a pause, seems like an overreaction right now; a pause would be unenforceable in any case.

Regulation is needed, but for more mundane reasons than saving humanity. Existing AI systems raise real concerns about bias, privacy, and intellectual property rights. As the technology advances, other problems may come to light. The key is to weigh the benefits of AI against the risks, and to be ready to adapt.

So far, governments have taken three different approaches. At one end of the spectrum is Britain, which proposes a light-touch approach: no new rules or regulatory bodies, but existing regulations applied to AI systems. The aim is to attract investment and make Britain an AI superpower. America has taken a broadly similar approach, though the Biden administration is now seeking public views on what a rulebook might look like.

The EU is taking a tougher line. Its proposed law categorizes uses of AI by how risky they are, and requires increasingly strict monitoring and disclosure as the risk rises from, say, recommending music to driving cars. Some uses of AI, such as subliminal advertising and remote biometric identification, are banned outright. Companies that break the rules face fines. Some critics say these rules are too stifling.

But some argue that governments need to go further still, treating AI like medicines: a dedicated regulator, strict testing, and pre-approval before public release. China is doing some of this, requiring companies to register their AI products and undergo a security review before putting them on the market. But in China the motive may be politics as much as safety: one key requirement is that AIs' output reflect the "core value of socialism."

How should society and governments react to this new technology? A light touch is unlikely to be enough. If AI is as important a technology as cars, planes, and medicines, and there is good reason to believe it is, then, like them, it will need new rules. The EU's model comes closest, though its classification system is overwrought; a principles-based approach would be more flexible. Requiring inspections, and disclosure of how systems are trained, how they operate, and how they are monitored, would be comparable to the rules that govern other industries.

That would allow the rules to be tightened over time if necessary. A dedicated regulator might then seem appropriate, as might intergovernmental treaties like those governing nuclear weapons, should solid evidence of existential risk emerge. To keep watch on that risk, governments could set up a body modeled on CERN (the Conseil Européen pour la Recherche Nucléaire, a particle-physics laboratory) to study AI safety and ethics, areas where companies have less incentive to invest than society would like.

This powerful technology creates new risks, but it also offers extraordinary opportunities. Balancing the two means treading carefully. A measured approach today can provide the foundations on which further rules can be built. But the time to start laying those foundations is now.

 

Notes

 

Agomuoh, F. (2023, February 13). Check Your Inbox — Microsoft Just Sent Out the First Wave of ChatGPT Bing Invites. Retrieved from Digital Trends: https://www.digitaltrends.com/computing/bing-users-will-be-able-to-test-out-the-integrated-chatgpt/

CERN. (2023). About CERN. Retrieved from https://home.cern/about

Future of Life Institute. (2023, March 22). Pause Giant AI Experiments: An Open Letter. Retrieved from https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Grace, K. (2022, August 4). What Do ML Researchers Think About AI in 2022? Retrieved from AI Impacts: https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/

Hu, K. (2023, February 2). ChatGPT Sets Record for Fastest-Growing User Base - Analyst Note. Retrieved from Reuters: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

Huang, R. (2023, April 11). China Moves to Censor AI. Retrieved from The Wall Street Journal: https://www.wsj.com/articles/china-lays-out-strict-rules-for-chatgpt-like-ai-tools-32f70c89

Milmo, D. (2023, February 2). ChatGPT Reaches 100 Million Users Two Months After Launch. Retrieved from The Guardian: https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app

Noorden, R. V. (2022). How Language-Generation AIs Could Transform Science. Nature, 605(7808). doi:10.1038/d41586-022-01191-3. PMID: 35484343.

O'Brien, M. (2023, March 29). Musk, Scientists Call for Halt to AI Race Sparked by ChatGPT. Retrieved from AP News: https://apnews.com/article/artificial-intelligence-chatgpt-risks-petition-elon-musk-steve-wozniak-534f0298d6304687ed080a5119a69962

The Economist. (2023, April 20). Technology and Society: How to Worry Wisely About Artificial Intelligence. Retrieved from https://www.economist.com/leaders/2023/04/20/how-to-worry-wisely-about-artificial-intelligence

 

 
