Preparing to train an AI-ready workforce in Tennessee


In a comprehensive yet easy-to-understand talk, Knoxville native Lynne Parker, a national expert on artificial intelligence, told an Oak Ridge audience of 125 people that “AI won’t take your job. It’s somebody using AI that will take your job” if you don’t learn to employ AI tools, many of which do not require coding skills.

She was quoting Richard Baldwin, an economist who spoke to the 2023 World Economic Forum.


Parker, who defines AI as “software that enables a machine to perform tasks that traditionally we thought required human intelligence and expertise,” said that AI will disrupt nearly every industry and state in our country.

“More than 500,000 jobs in Tennessee are expected to be augmented or replaced by AI,” she stated, adding that most jobs are “a whole bunch of tasks” and that AI tools will be used only for those tasks they can do better than you.

Her talk was the second one this year in the Dick Smyser Community Lecture Series sponsored by Friends of Oak Ridge National Laboratory (FORNL); ORNL is a Department of Energy facility that uses AI in some of its research. Smyser was the founding editor of The Oak Ridger and an FORNL member.

From 2018 to 2022, Parker was on loan from her computer science faculty position at the University of Tennessee to the U.S. Office of Science and Technology Policy, the science arm of the White House. There she became the founding director of OSTP’s National AI Initiative Office, which was created by a new law. She is back at UT as associate vice chancellor and has taken the lead on the Tennessee AI initiative.


To train an AI-ready workforce, the initiative is introducing the teaching of AI tools to students at Tennessee’s K-12 schools, community colleges, public universities such as UT, and Tennessee College of Applied Technology (TCAT) schools. TCAT schools teach skills in advanced manufacturing, automotive technology, aviation maintenance and building construction.

Parker said AI-related employment is growing 40% each year in Tennessee. Industries expected to be affected are forestry, farming, manufacturing and materials, transportation and logistics, health, energy, information technology, and hospitality and entertainment.

In her educational and entertaining talk, Parker spoke about the beneficial uses of AI, its ethical implications, its potential use for malicious purposes and her role in promoting AI’s benefits and reducing its risks.

She gave examples of humorous ways to use chatbots like Claude (Claude.ai), ChatGPT (chat.openai.com) and Bard (Bard.Google.com), which understand and generate humanlike text. In the past year, these chatbots have made the public much more aware of and awed by AI’s power and potential.

In one instance she gave Claude this prompt: “write a short political speech in the style of Forrest Gump about free chocolate.” In the chatbot’s response, the fake politician promised to “make chocolate free for every American” and oppose the politicians who “wanna put a tax on chocolate” because it raises healthcare costs.

Parker said that she uses chatbots to overcome writer’s block, improve the flow of her first drafts and spark creativity in her writing. But she acknowledged that educators are concerned that students will cheat by using chatbots to write their papers.

Her advice to instructors is to “design your writing assignment so that it’s so specific to what is taught in class that it’s going to be hard for students to use these language models” to write an acceptable essay. She added, “This is not a solved problem right now, but there are a lot of smart people who are thinking about this.”


AI has been around since 1950, but over the past decade it has become more capable. The reasons, she said, are the expanded amount of data available (thanks partly to many more sensors and cameras), faster computers and improved algorithms that can, for example, recognize patterns in data, including images. An algorithm is a set of rules a computer follows as it makes calculations using large sets of data.

AI technologies that many people have been using, she said, include search engines, Google Maps, Google Translate and voice assistants like Apple’s Siri and Amazon’s Alexa. If we visit a website to look at an item we might buy, or mention that we want an item in the presence of Alexa, we will likely see targeted advertisements for that item on other websites. If you watch a movie on Netflix, it will recommend similar films to watch.

Parker said that AI is being used for health monitoring. For example, newer Apple Watch models can detect if a wearer falls and then suggest that the watch be used to call 911. AI is used in smart home devices (e.g., to turn on room lights). It is being used in driver-assist technologies and in self-driving vehicles now being tested. And it is being used for fraud detection; for example, you are alerted about an uncharacteristic charge on your credit card.

She noted that AI is being used in personalized learning. It can target gaps in a student’s learning and tailor the next lessons to get the student caught up with the class in grasping what’s being taught.

She listed many uses of AI in various industries: detection of an early sign of a tumor on an MRI image; discovery of chemicals that could make safe, effective therapeutic drugs; remote monitoring of cancer patients; precision agriculture to increase crop yields; forecasts of weather impacts on the electric grid; designs of nuclear reactors with special features; and early detection of flawed aircraft components that require maintenance, saving money.

Parker then talked about the downsides of AI, which include threats to privacy, safety and security, as well as discrimination and bias.

Noting that AI image generation has improved greatly over the past decade and can be driven by photos and even text, she said it’s possible to create a movie that uses realistic but fake images of a famous actor, so the producer doesn’t have to pay him or her. That’s a concern of the movie and TV actors currently on strike.

“An AI system could generate a voice that sounds like mine and use it to call my husband so he thinks that I’ve been kidnapped and that he will have to send a million dollars to get me back,” she said. “It’s just a scam.”

She added that a risk to democratic elections is that AI could be used to create a video with fake images of a politician seeking re-election who makes untrue statements “that never happened but the politician in the video looks and sounds so realistic that a lot of people believe it’s true.”

According to Parker’s research, authoritarian governments in China, Russia and other nations are using AI for censorship, for the creation and spread of disinformation, and for mass surveillance enabled by the ubiquitous presence of cameras and facial recognition technology.

“Authoritarian countries are using AI to repress citizens and squash dissent to make sure that their government prevails,” she said. “It’s quite easy for people in power to make it so you can’t get on a train or plane or have access to better housing if you say something against the government.”

Parker talked briefly about her role at the White House from 2018 to 2022. She was the final negotiator for the United States on the wording of the first internationally agreed-upon set of definitions, principles and expectations for AI among the 38 democracies of the Organization for Economic Cooperation and Development.

When the National AI Initiative Act of 2020 became law in January 2021, she became the founding director of the White House office that coordinates policies and activities across the federal government in AI research and development, education, workforce development, infrastructure and international engagement. The office she led worked on executive orders, strategic plans and memos on regulating the use of AI in the private sector and the federal government, including the Department of Defense.

On May 16 this year, Parker and others testified before the Senate Committee on Homeland Security and Governmental Affairs on “Artificial Intelligence in Government.” She said the main topic was the ethical implications of AI, especially the lack of transparency about how algorithms and formulas are used, particularly when they cause harm to citizens.

Even though the U.S. has passed more laws regulating AI than other countries have, Parker said one poll found that only 35% of Americans surveyed said AI will mostly help people, while 35% said it will mostly harm people. In Japan and China, the percentages were 61% positive and 13% negative.

The government’s goal, she said, is “trustworthy AI.”

AI has many benefits but “Americans can’t experience those benefits if they don’t believe in the technology,” she said.

As she indicated, on Oct. 30 the White House issued a new AI executive order, which invokes the Defense Production Act and is designed to make AI safer.

CUTLINES: Lynne Parker delivered her AI lecture in the Zach Wamp Auditorium at the Y-12 New Hope Center. Photo by Carolyn Krause

Lynne Parker speaks on the national stage as an AI expert. Parker (third from right) is shown with White House staff as President Trump signs the 2020 bill that created the National AI Initiative Office, of which she became the founding director in 2021. Provided by Lynne Parker

Oak Ridgers at the reception before Lynne Parker gave her well-received lecture. Photo by Jim Golden

