Brainstorming ideas for your next product. Learning what to consider when designing a user interface. Summarizing the steps needed to start working with a new tool. Writing a fairytale or a poem about modern life in the style of a historical bard. These are some of the mundane (or wildly creative) tasks that can easily be done with generative AI.
Over the last year, new offerings have highlighted the potential for generative AI to drastically change how we approach content creation. Generative AI could enable knowledge workers to spend less time doing research and completing routine tasks, and more time thinking creatively and strategically. But are we ready to let generative AI tools into our daily work? Will they carry the same problems with bias, lack of context, and concerns about inclusivity that came with the last generation of AI solutions?
In this article, we will summarize what generative AI can do and its impact on knowledge work. We’ll discuss the rollout of chatbots in search tools, highlighting some of the drawbacks and biases of generative AI. Finally, we’ll touch on its impacts on inclusion and where generative AI can go from here.
What is generative AI?
Generative artificial intelligence (AI) is a collection of technologies that create new content, including text, code, audio, images, video, and simulated data. It is an application of artificial intelligence that has accelerated in the last few years thanks to investments by big tech leaders, notably OpenAI, a company funded by Elon Musk, AWS, and Microsoft.
Generative AI is based on a machine-learning model called the transformer. A transformer is a neural network architecture designed to understand context and sequential patterns, such as how to capture the subject of a picture, blend it naturally with its background, and style the whole scene. When combined with a language model, which predicts the words and phrases likely to follow the previous ones, we get a chatbot. A generative model is trained on large amounts of data, like any other deep learning model, but is then tasked with using the transformer to create outputs from prompts it has never seen before. OpenAI, Google, and other tech companies have pioneered research in this field over the last decade. In the past few years, they have applied huge amounts of computing power to train their models on data from the internet.
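The core mechanic of a language model, predicting the next word from the ones before it, can be sketched with a toy bigram model. The vocabulary and probabilities below are invented for illustration; a real model learns billions of such weights rather than a hand-written table:

```python
# Toy bigram "language model": for each word, the possible next words
# and their (made-up) probabilities.
bigram_probs = {
    "generative": {"ai": 0.9, "art": 0.1},
    "ai": {"creates": 0.5, "writes": 0.3, "predicts": 0.2},
    "creates": {"text": 0.6, "images": 0.4},
}

def predict_next(word):
    """Return the most likely next word given the previous one."""
    candidates = bigram_probs.get(word, {})
    return max(candidates, key=candidates.get) if candidates else None

def generate(start, max_words=5):
    """Greedily extend a sentence one predicted word at a time."""
    words = [start]
    while len(words) < max_words:
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("generative"))  # → generative ai creates text
```

A transformer replaces this one-word lookup with attention over the entire preceding context, which is what lets it stay coherent across whole paragraphs.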
Impact on knowledge work
Many with careers in information technology and analytics can be considered “knowledge workers.” This includes not only engineers, data analysts, and programmers but also web designers and systems architects. Knowledge workers spend their time researching and solving complex problems. Therefore, they need to have not only domain expertise but excellent communication skills. Generative AI has the potential to make these jobs more efficient and enable everyone to be more creative.
Instead of searching the web endlessly to understand a new idea, you can have a conversation with a chatbot to home in on the details relevant to your task, or simply to satisfy your curiosity.
Rather than scrolling through hundreds of stock images that don’t quite capture the excitement you want to convey, you can use an AI art generator to create a photorealistic scene of your users in a way that resonates with them.
As an alternative to passing marketing copy back and forth, you can ask a chatbot to edit your work. For example, it may offer ideas to expand upon, summarize a long description, check grammar, and translate your message.
Rather than creating the same code patterns over and over (called “boilerplate” code), you can use generative AI to automate even this task. You can prompt GitHub Copilot by adding code comments describing the desired logic. It suggests code to implement the solution, enabling a developer or data scientist to focus on the big picture.
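This comment-as-prompt workflow can be illustrated with a sketch. The comment at the top is the kind of prompt a developer might write; the function beneath it is one plausible completion a tool like Copilot could suggest, not actual Copilot output:

```python
# Prompt written by the developer:
# Parse a CSV file of user records into a list of dicts,
# skipping rows with a missing email address.

# One plausible suggested completion:
import csv

def load_users(path):
    users = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("email"):  # skip rows with no email
                users.append(row)
    return users
```

The developer still reviews and tests the suggestion, but the routine structure (file handling, the reader loop) arrives for free.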
We should think of generative AI as a productivity tool and a superpower that accelerates the creative side of knowledge work. As this technology gives us more power to complete nuanced tasks, we can spend more time on bigger ideas: thinking strategically and innovating.
Rollout and impact on search
The compelling part about generative AI is the ease of use. You go to a web page, type in a question or task, and the technology will instantly respond. A chatbot will remember your past inputs and responses, creating context and enabling your experience to feel like a real conversation.
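That conversational memory is simpler than it feels: on each turn, the interface sends the whole running history back to the model. Here is a minimal sketch, with the message format modeled on common chat APIs and `fake_model_reply` standing in for a real model call:

```python
def fake_model_reply(history):
    # Stand-in for a real model call: reports how much context it sees.
    return f"(reply informed by {len(history)} prior messages)"

history = []

def chat(user_input):
    """One conversational turn: append the user's message,
    get a reply based on the full history, and remember both."""
    history.append({"role": "user", "content": user_input})
    reply = fake_model_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("What is a transformer?"))
print(chat("Can you give an example?"))  # second turn sees the first
```

The model itself is stateless; the growing `history` list is what makes the exchange feel like a real conversation.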
As a major investor in OpenAI, Microsoft has begun incorporating a ChatGPT interface into Bing. The initial response has been remarkable: the new search engine is, in many ways, a “marked improvement” over Google. Google, which has sponsored much of the foundational research in generative AI, has responded by releasing its own search chatbot, Bard. However, there is genuine concern about whether the technology is ready for the general public. Chatbots sometimes give factually incorrect answers. They seem to have their own personality and feelings. They can even be subversive, revealing dark, strange desires.
Thankfully, the perception of “personality” is something that can, with enough engineering, be tuned. Remember that, at its core, a language model predicts the next word, phrase, or idea. Adding the right amount of randomness to the model lets engineers experiment, with results ranging from monotonous, to interesting and conversational, to gibberish. Microsoft has also begun limiting conversations with its Bing chatbot to a few interactions to avoid the stranger paths that have drawn so much attention.
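This randomness knob is commonly exposed as a "temperature" parameter. A minimal sketch, using invented scores for three candidate next words: dividing the model's scores by the temperature before the softmax sharpens the distribution when temperature is low (predictable output) and flattens it when high (surprising, eventually gibberish):

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Sample one token index from raw model scores (logits)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

# Invented scores for three candidate next words.
logits = [2.0, 1.0, 0.1]

# Near-zero temperature: almost always picks the top-scoring word.
print(sample_with_temperature(logits, 0.05))
# High temperature: choices approach a uniform coin flip.
print(sample_with_temperature(logits, 10.0))
```

Tuning this single parameter is one of the simpler levers behind the shift from "monotonous" to "conversational" to "gibberish."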
As for inaccuracies in chatbot answers, tech companies are working on this front as well. For example, Google’s AI research lab DeepMind is developing a chatbot called Sparrow, which will cite its sources. With these two giants in an “arms race” over search, generative AI looks set to change how we all find information in the near future.
How does generative AI reflect diversity?
Like most technologies, generative AI is neither “good” nor “bad”; its reputation will be formed by how it is used. One immediate benefit of generative AI is access to powerful, real-time language transcription and translation. For example, OpenAI’s Whisper listens to human speech and transcribes it in real time. It recognizes 98 spoken languages and is robust to accents, background noise, and technical language. This means it can be used to translate stories and wisdom from other cultures, enabling diverse voices to be heard with ease.
There are, however, legitimate concerns about how generative AI models can produce biased results, perpetuating racial and gender stereotypes. For example, in the summer of 2022, as OpenAI prepared to launch its art generator DALL-E 2, researchers found that it sexualized women, produced mainly images of Black men for prompts with negative connotations (e.g., “man sitting in a prison cell”), and defaulted to white men in many other situations.
Much of this stems from a lack of representation in the data used to train the models. The responsibility for making results less biased lies with the organizations that create AI technology. This can be done by openly testing for bias and iterating, as OpenAI has done. It can also be done by building diverse teams: not by chasing a “diversity score,” but by finding people who bring the requisite skills regardless of their backgrounds, being mindful of privilege, and fostering interest in technology roles early in people’s careers.
Where to go from here?
The current offerings of art generators, AI code tools, and chatbots are only the beginning of a new wave of AI. As these technologies mature, many other organizations will find more use cases that enable them to take advantage of generative AI in new and exciting ways. These products and services can enable more diverse audiences. Like all data products, it is incumbent on the data professionals who create them to ensure they recognize biases in training data and take responsibility for model results.
As responsible users of generative AI, we should pay attention to the prompts we input and look for implicit bias in the results these tools deliver. We should fact-check results from chatbots and not plagiarize the content they create. As programmers, we must inspect and unit test automatically generated code before committing it. As creators, we should ensure that photos, videos, and audio produced with AI are inclusive, diverse, and representative of a broad range of cultures.
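In practice, "inspect and unit test before committing" can be as lightweight as pinning down the behavior you expect with a few assertions. The `slugify` helper below is a hypothetical example of code an AI tool might generate; the checks after it are what the reviewing developer adds:

```python
import re

# Hypothetical AI-generated helper, under review before committing.
def slugify(title):
    """Convert a title into a URL-friendly slug."""
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics
    return slug.strip("-")

# Reviewer-written checks pinning down expected behavior.
assert slugify("Hello World") == "hello-world"
assert slugify("  AI: Friend or Foe? ") == "ai-friend-or-foe"
assert slugify("---") == ""  # degenerate input should not crash
```

If any assertion fails, the generated code goes back for revision rather than into the codebase.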
Generative AI is here today, with many tools existing and coming soon to make our work more efficient and our lives more creative.
Michael Rice, Sr. Consultant – Data Science, Swoon Consulting