Why is generative AI both exciting and terrifying?
In Take 5, Sheridan's thought leaders share their expert insight on a timely and topical issue. Learn from some of our innovative leaders and change agents as they reflect on questions that are top-of-mind for the Sheridan community.
Artificial intelligence’s use of predetermined algorithms and rules to perform preset tasks has enhanced our lives for decades, from optimizing manufacturing processes to recommending television series to providing a worthy opponent in chess. But the arrival of generative AI and its ability to create original content — including articles, images, videos, music and computer code — has many wary of its potential to disrupt the ways in which we learn, work and play.
In this edition of Take 5, Sheridan Faculty of Animation, Arts & Design Associate Dean Myles Bartlett breaks down generative AI’s capabilities, how it compares to past technological advancements, how educators should approach it and more.
1. What is generative AI and how does it work?
When my mom asks me about AI, I tell her that it’s machines doing things that a human would typically do.
To take it a bit deeper than that, various AI technologies have been conceptualized and developed over the last 80 years or so, ranging from computer vision systems to speech recognition to robotics. These technologies began to change exponentially in 2017 when language models were integrated into them, meaning that advancement in any one of those technologies could accelerate advancement in others.
Generative AI is difficult to define because it’s changing so quickly. But my answer today is that it’s predictive, using language models to produce answers and solutions — whether it’s anticipating what the next word would be in a sentence, or even converting an image into language, then using its language models to create a new image.
That’s a very basic description of generative AI that ignores a lot of steps along the way, but that’s essentially what’s happening. It’s thrilling and exciting, but it’s also absolutely terrifying.
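To make that next-word idea concrete, here is a minimal sketch of how a language model guesses the word most likely to follow a prompt. It uses the open-source Hugging Face transformers library and the publicly available GPT-2 model purely as stand-ins; the interview doesn't reference any specific tool, and production systems add many layers on top of this single step.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available language model (chosen only for illustration;
# the interview does not name a specific model).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Artificial intelligence is machines doing things that a human would typically"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the model's vocabulary

# The single most likely next token, i.e. the model's prediction of the next word.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode([next_token_id]))
```

Chatbots simply repeat this prediction step over and over, feeding each chosen word back in as part of the next prompt.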
2. What makes generative AI both exciting and terrifying?
AI’s power to enhance efficiency and accessibility could deliver many practical benefits, such as helping to cure disease. Brain science is incredibly complicated, involving billions of neurons with trillions of connections between them. Science and research are often processes of elimination: testing a combination of variables to see what happens, then moving on to the next combination, which can take a very long time. AI can complete this process instantly.
But I also feel that we can’t even anticipate what all the benefits are going to be because if generative AI is fully deployed and is as impactful as some experts are projecting, everything will be restructured. Everything will be different.
That’s why it’s both exciting and terrifying: the benefits and impacts of generative AI will depend on how it’s deployed.
3. How does the arrival of generative AI compare to past advancements in technology?
When I was watching coverage of the Screen Actors Guild strike and heard Bryan Cranston say the industry won’t allow jobs to be taken away and given to robots, it reminded me of the union riots that took place when factories turned to automation. Just like when coal mines realized that a big machine with a bucket at the front could replace two individuals scooping a thousand shovelfuls of dirt, technology is coming to the doorstep of individuals who hold specific skills or knowledge.
The advent of the internet represented similar threats to anybody who controlled information. Generative AI creates, understands and corrals knowledge, which has traditionally been the role of humans. People are trusted for the knowledge and experience they’ve accumulated over many years, and now somebody can spend an afternoon on ChatGPT to learn all the right things to say or do. And generative AI has already gotten to the point where its output is virtually indistinguishable from human work.
Every major shift in technology requires some sort of policy enhancement. For example, the internet required us to think about privacy law in ways that we had never thought about before and to change the ways in which we do surveillance, and that happened after privacy was lost. The risk of that happening with generative AI is so great that I think we need to have the policy first.
4. As an associate dean in Sheridan’s Faculty of Animation, Arts & Design, how do you believe educators should approach generative AI?
I can’t presume to know about all the challenges generative AI is presenting to various disciplines. I can only speak to the disciplines I’m more intimately aware of. It’s also impossible to predict where generative AI is going.
With those two things in mind, I’m advising my instructors that we need to be using generative AI in our classrooms. Not only do we have to be using it, we have to let our students use it, and we need to discuss our experiences together because we’re all learning at the same time. We also need to have meaningful conversations with our students about the ethics of using generative AI: not what it means for them as someone who wants to become a designer or an illustrator or a photographer, but what it means for them as a human being.
Assessment, of course, is the big question that always comes up about generative AI in education. We need to go through our curriculum and courses in a very practical way to determine if our assessment methods can be completed by sophisticated AI — even if that AI doesn’t yet exist, because it will in six months. If they can be completed by AI, we need to find other ways to make our assessments.
In the past, we may have assessed a painting in its totality, looking at the skill involved and the colour and the composition. But now that painting could have been created by ChatGPT, so I encourage my faculty to focus on the decision-making process, asking the students why they chose to use a certain colour or tool. The great thing about that approach is that it’s discipline-agnostic. Regardless of whether it’s a painting or a car part or a public policy, assessing the final outcome isn’t important. What we really need to be asking is how our students are gathering information and making decisions.
5. What do you anticipate the usage and impact of generative AI will look like in five years?
Most technology follows an S-curve that is fairly predictable. It begins with a long tail of research, speculation and philosophy, thinking about what’s possible. Eventually, there are enough significant breakthroughs that the curve becomes vertical, representing progress in terms of moving into everyday life, usability and versatility. Then the curve levels off again with commercialization and innovation before things stall out.
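As a rough sketch of that shape, the snippet below evaluates a logistic function, the textbook stand-in for an S-curve, and prints a small text chart. The growth rate and midpoint are arbitrary values chosen only to make the slow start, steep middle and levelling-off visible; nothing about them comes from the interview.

```python
import math

def s_curve(t, k=1.0, t0=10.0):
    """Adoption level between 0 and 1 at time t (illustrative parameters only)."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

# Print a tiny text chart: a long flat tail, a steep middle, then a plateau.
for t in range(0, 21, 2):
    level = s_curve(t)
    print(f"t={t:2d}  adoption={level:.2f}  " + "#" * int(level * 40))
```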
When I look at generative AI, I think we’re right at the beginning of the vertical curve. People are still trying to figure out how to commercialize this, how to integrate it with their products, how to grab the largest portion of the market. We’re still at the level where people are creating things that they don’t know how they’re going to monetize; they just know people want them, so they’re putting products out there, often at a loss, to see what sticks.
My hope is that in five years, generative AI is very strictly regulated and purposefully deployed so that we’re using it for the betterment of human beings, for things like medical research and climate modeling. However, the reality could be very different because capital tends to win. If we as humans aren’t able to resist our base urges and our innate desire for the path of least resistance, people could use generative AI to influence the ways in which others think or make decisions or even vote. It could get really messy.
Myles Bartlett is an artist, designer and educator who currently serves as an interim Associate Dean in Sheridan’s Faculty of Animation, Arts & Design. He has spent 20 years teaching in various post-secondary institutions in the areas of Digital Art, Interactivity Design, Broadcast Design, Video/Film Production and Art/Design Fundamentals. Myles is also the former Art Director at TELETOON Studios Inc., where he directed the art department responsible for Promax BDA award-winning broadcast, print and interactive design in support of eight unique TELETOON Inc. brands across three national and two regional cable networks, including the TELETOON, TELETOON Retro and Cartoon Network properties. He holds a sociology degree from Carleton University, an AOCAD credential from OCAD University and a Master of Fine Arts from the University of Windsor.
Interested in connecting with Myles Bartlett or another Sheridan expert? Please email communications@sheridancollege.ca.
The interview has been edited for length and clarity.