Adventures in AI: What's my role?
Part 2 of a series of reflections on collaborating with AI at work
By day, your mild-mannered poet works in technology. I have a 30-year career in software engineering, though for the last 15 or so years my focus has been on parts of the lifecycle other than simply writing code. And as a fan of fantasy and science fiction, of course I’m fascinated by artificial intelligence.
As an aside: I know that generative AI is a massive problem for creators of art, music, writing, videos… creators of just about anything. I can also see that in corporate America, the ability to use AI as a force multiplier is quickly becoming a must. So here we are.
One thing I have found very helpful in working with NotebookLM is to tell Gemini the role I want it to play (and sometimes my own role as well). For example, if someone asked you “What’s a quark?” your answer would differ depending on whether you were lecturing at a university, chatting with your kids in the car, or tutoring a high school physics student. Similarly, an AI’s answers will differ based on the role you ask it to play. After all, I may be using Gemini as if it were an employee I’m directing, but I didn’t hire Gemini into a specific role, so it doesn’t have a job title unless I give it one!
I’ve also found it helpful to discuss how I’d like us to work together. Do I just want to say “Here are the inputs, give me the outputs”? Or is it unclear what a successful outcome should look like? I tend to operate somewhere between those two extremes, so before I assign a task to Gemini, I’ve started discussing these expectations up front. Sometimes I even ask, “What do you think a successful outcome should look like?” Just like human colleagues, Gemini often has ideas I haven’t considered.
This brings me to the most important thing I’ve learned so far about collaborating with an AI: it’s a lot like collaborating with a human. When I delegate a task to a member of my team, I need to give them background context, equip them with templates and examples, confirm that they understand the problem, agree on outcomes, and review their work. All of that applies to AI as well, especially that last step. These are generative AIs, which means their job is to generate, to create things. And among the things they can create are fake information and spurious citations. That’s not evidence the AI is bad; it’s evidence the AI is doing exactly what it was built to do: generate content. So just like delegating to a human, delegating to an AI needs to include a thorough review of the output, to ensure that everything is kosher.
I realize this post was more didactic than reflective. Sometimes they’re just going to be that way. In my next post about AI, I plan to tell the story of what happened when NotebookLM glitched and then got all sassypants. Next week!