
Generative AI Discussion

 

George DeMet, Patrick Weston, Jack Graham, Oksana Salloum, and Rob DeVita discuss the fast-changing and often bewildering world of generative AI. They talk about some of the ways people are using these tools at home and at work, a few tips on how to use them in a safe and responsible way, and their potential impacts on culture and society. 

Transcript

George DeMet
Hello and welcome to Plus Plus, the podcast from Palantir.net where we talk about what's new and interesting in the world of open source technologies and agile methodologies. I'm your host, George DeMet.

[Music plays]

George DeMet
On today's episode of Plus Plus, we're going to talk about what's popularly become known as generative AI. These are software tools like ChatGPT, Bing, Midjourney, and others that utilize neural networks and massive amounts of data, much of which is scraped off the web, to generate text, images, audio, and video. This technology has actually been around for a long time, but what's notable about the latest generation of AI tools is their uncanny ability to mimic human language and create sophisticated works that, at least at first glance, can be easily mistaken for something created by a human.

So I'm here today with a few Palantir team members to help us sort through the fast-changing and often bewildering world of AI, some of the ways people are using these tools at home and at work, a few tips on how to use them in a safe and responsible way, and we'll also explore their potential impact on culture and society. So let's go ahead and get started with introductions. I just kind of want to go around and ask: Tell us your name and the AI tools that you've worked with so far.

Patrick Weston
Yeah, I'm Patrick Weston. I am a developer at Palantir, so my focus is a little geared toward that experience. I've mainly used ChatGPT and GitHub's Copilot tooling when it comes to AI work. I've also used ChatGPT in my personal life for various tasks, both small and large, so I'm sure we'll dive into it. But those are the two that I've used.

Jack Graham
Hi, I'm Jack Graham. I am a user experience architect and designer here at Palantir, and I have used ChatGPT and Midjourney.

Oksana Salloum
Hi everyone. My name is Oksana. I'm an engineer here at Palantir, and I've worked with ChatGPT and another tool called Grammarly. For personal use, I've also been working with Remodel AI.

Rob DeVita
Hi, I'm Rob DeVita. I'm a senior project manager at Palantir and I've worked with ChatGPT, dabbled in Midjourney a little bit, and lately I've been using Miro AI.

George DeMet
And my name is George DeMet. I have worked with ChatGPT, Bard, Claude, Stable Diffusion, and Adobe's generative filters that are in the soon-to-be-released Photoshop beta. I thought we might get started by seeing how folks are using these kinds of tools in your day-to-day life, outside of work. Patrick, you had mentioned that you were using them in a few different ways. You want to elaborate?

Patrick Weston
I can think of two primary ways. The first is to get some sort of first draft for written communication. I'm planning a wedding in December, and I was using it to help me with invitation wording. I've also been doing some speech-writing sorts of things, and it's been helpful with that. Then, I'm also doing a good chunk of trip planning. My parents have never been to Europe, and we want to go as a family in the spring. I have another trip coming up as well, so I've been giving it prompts and trying to get travel information out of it. So, it's been helpful, I think, as a starting point for a lot of things. I've also used it in a more tactical way to do some repetitive tasks, but that's not quite as exciting. Writing is not my strongest suit, and I feel like I react better to having a draft of content. It's been really helpful to get something started that I can then tweak and edit to make it sound like my own.

Rob DeVita
Speaking of event planning, Patrick, my wife is celebrating a birthday soon, so we recently asked ChatGPT to help us select some baked goods from Costco for her Sunday brunch, and it did a pretty good job. I like to use AI for food-related things, so ChatGPT has kind of been my sous chef lately. I love cooking, especially the improvisational parts, but on the other hand, eggs are about $7.50 a dozen right now. So if I want to make a perfect soft-boiled egg, I tend to ask the robot how to do that, and it gives me the answer.

Jack Graham
Outside of work, I'm a writer. So obviously, I have a lot of questions in my head about the fate of creative people in this new world. As for generative images, I've been working a lot with Midjourney, and I've dabbled with some of the other ones like DALL-E, primarily to visualize things. I've had it draw pictures of characters for me, or I'll give it a prompt describing something. For instance, there was a weird chest at the bottom of a harbor in one story I was working on, and I had it draw the weird chest with divers finding it. I find that it makes things more interesting and easier to visualize, and sometimes the AI comes up with stuff that you didn't expect.

I haven't used ChatGPT for personal use much at all because I am a writer, and I've got a bit of a John Henry complex about that, you might say. I've dabbled with the search AIs a little bit, and so far I'm finding them terrible. The Google one is especially inaccurate right now, although GPT itself also just lies about stuff. I don't trust it at all as a search or knowledge tool in many respects. Everything it puts out you still have to fact-check. I've had conversations with it where it said something that I knew was wrong, and when I said no, that's wrong, it would finally admit that it was wrong and say, "I'm sorry, I don't have an answer to that."

The final way I've been using it is as a language tutor. That's the one outside-of-work use that I've found really neat. It supports some pretty obscure languages that I happen to be studying, like Icelandic and Haitian Creole. You can have a dialogue with it in those languages, and it will detect what language you're using and do a pretty decent job of keeping up with the conversation. That part's been interesting and seems to be a lot more accurate than some of the search information.

George DeMet
That brings up one of the challenges I've observed: even when it is completely wrong, it sounds so authoritative. In our little Slack channel, Jack, you shared a post someone made describing ChatGPT as "mansplaining as a service," and it felt so real.

Jack Graham
It does feel really real, doesn't it? Because it doesn't hedge when it comes back with answers. A lot of the time, ChatGPT doesn't hedge, it goes all in on whatever disinformation it's about to give you. How did that egg come out, Rob?

Rob DeVita
Oh, the egg came out OK. I have had it lie to me as well. It's following the secret "fake it till you make it" protocol.

George DeMet
I asked it the other day to list some interesting articles that I had written. I've been on the Internet long enough that, you know, my name is out there. But it generated a list of 10 completely fictitious articles about digital marketing. I then proceeded to ask it more about my professional background and history, and it just made up a backstory for me that was completely false. But people are using it to find information, and again, as we were saying, I think that's one of the challenges. It does sound so authoritative.

Jack Graham
It'll tell you all about the time President Kennedy landed on the moon in great detail.

George DeMet
Is that one you've tested, Jack?

Jack Graham
No, but go do it now.

George DeMet
Uh-huh. Oksana, I'm curious: how have you been using AIs outside of work?

Oksana Salloum
So ChatGPT I use, like other folks, to help me plan events like my daughter's birthday party, or outdoor activities, things on the water, or fun activities for her. And there's another tool I use; I mentioned it, Remodel AI, and I use that to remodel my house. It actually gave me some really good advice and ideas, though some were not really relevant or would be impossible to create in real life. It goes both ways. The other thing is Grammarly. I use it pretty often when I'm writing work emails or documents. Since English is my second language, this is my jam. I use it to make sure my sentences are correct and that people can read them easily. That's pretty good. What else... that's the main thing I use. I feel we're using so many AIs without even thinking about how many there are. It's like, I don't know, Face ID or Alexa or Siri or something else every day.

George DeMet
That's absolutely true. I think ChatGPT and the other newer programs get a lot of attention, but that's a really valid point: a lot of the underlying technology has been built into the things we use for quite some time. The Photos tool on my iPhone that will pull together a special moment or tell me which photos are of which people in my family, for instance.

Outside of work, I tend to play around with different chat engines. Claude is one that is a bit newer; it's from Anthropic, a company founded by former OpenAI researchers, and it's more focused on creative writing. One thing I did last weekend was come up with various historical figures and create a mini-bot that role-plays as that person. You can then ask the historical figure about things that have happened since they passed away. That was fun. We did use it to plan one of our children's birthday parties, but otherwise, I don't think I've been doing anything really useful with it; I've just been playing with it as a toy. I had mentioned Stable Diffusion before, another image generation app similar to Midjourney. I've been trying to see what kind of mythical creatures I can create with it.

Speaking of how we've been using generative AI outside of work, how about at work? Jack, I know you recently did a really interesting experiment with one of our clients using Midjourney. I'd love to hear more about that.

Jack Graham
Oh, this was exciting. This was something I had been wanting to try in a live setting with a client for quite a while. I had done a little bit of work using Midjourney to brainstorm design ideas at a previous employer, but this was the first time I really had a chance to put it to the test.

So what we did was collaboratively create a mood board in Miro with a client. Mostly, the client was doing this. They created a mood board, which, you know, has been a design tool since the Mad Men era, and mood boards existed even before that in architecture. Usually, you use a mood board as just a reference piece throughout your design process. You go back to it when you're trying to think about the look and feel of a piece or a set of designs you're working on. In this case, though, you can point the AI at the mood board. So, I fed the mood board our clients created to Midjourney as an image prompt, and then I did a whole bunch of takes using its interface for evolving the images. I had about 15 minutes while one of my colleagues was doing something else with the group, and when I came back, I had a bunch of website and interface element designs that the AI had brainstormed for them based on the mood board they created. They put all the ingredients into the pot, and I made them some pretty good soup.

We then discussed and voted on the designs that the AI came up with. From those, I was able to derive a set of design principles that will serve as our guide throughout the process, and we'll have some of those examples of AI-designed sites to refer back to. Somebody might say, "Hey, that one was cool. Can we do something a bit like that?" So it's like a mood board on steroids, really. It takes the mood board and extrapolates from it in all kinds of different directions to give you something you might not have come up with on your own.

This works very well for graphic design because at any given time in the world of graphic design, there are probably only about a dozen people doing really original work. Everyone else is working with a set of styles, aesthetics, this year's colors, this year's way of pushing pixels that we're all drawing from. So this shake-and-bake approach to getting a design works especially well for websites and mobile.

George DeMet
That was such a cool exercise. What I really love about it is that the truly creative part seems to be done by the humans. And you're using Midjourney to synthesize what's coming out of the human designers' brains and put some order to it.

Jack Graham
And this is something we've been striving toward for a long time with AI. Having AI serve as a tool. I refer to the procedure, really, as harvesting. We're finding wild ideas, and we're taking the ones we think we might be able to domesticate. The AI is perfect for that. It can give us all these different permutations of how something could evolve, and then let us contrast and compare. It's very satisfying for those doing the exercise. With normal mood boarding, it's very participatory to begin with. But with AI mood boarding, you can then take the output and quickly see where it might lead. That's really powerful for sparking ideas in the client's mind.

George DeMet
Now, Patrick, you said you've been using it for coding.

Patrick Weston
When Jack was talking, I was actually thinking of some parallels on the code side as well. Obviously, it's not as graphic or creative in a visual space, but it's definitely very helpful in doing some rudimentary coding exercises. It almost acts as an advanced clipboard, remembering things I have done before, but also doing some of the conceptual work. It comes up with structures and frameworks for things and answers questions that are more complicated than you might get out of a search engine.

I mainly use GitHub's Copilot for the advanced clipboard. The idea is that you can add comments to your code, and it'll suggest code to meet that need given the comment. There are lots of things, particularly in Drupal development, where there's a long chunk of code that's pretty straightforward. However, there are many instances where you could forget syntax or not call something the right thing. In some cases, something might be called "type", but somewhere else it's called "bundle". So, there are all these little details to remember. It does a great job figuring out that sort of thing. I've found pretty good success with it. Like I said, it is more narrowly focused on a couple of lines of code or a section like that.

But then I've used ChatGPT to do some of the conceptual work, and this is where it gets a bit hit or miss. But when it's right, it's really powerful. The cool thing about how ChatGPT responds to coding requests is that it knows the concept of multiple files and how they interact with each other. You can ask it to build some sort of complex technical thing, and it'll show you the three or four files you need to get there. That's been super helpful.

I find that it's similar to what we were talking about earlier. You have to know and provide a human touch or the creative element. You have to know what you're talking about to understand if what it's giving you is good, and also to clearly describe what you're looking for. There were instances where it wasn't helpful, but I think it's because I wasn't phrasing something correctly. It's not replacing a developer, but I feel like it is helping me be more creative when it comes to the coding side. It's been really cool to see so far. I feel like I've been able to not only speed up but do things that are a bit more complex than I could in the past. It really explains what it does. It doesn't just spit some code out at you, so I feel like I'm learning on the fly.
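A minimal, hypothetical sketch of the comment-driven workflow Patrick describes, in Python rather than Drupal PHP: the developer writes a descriptive comment, and an assistant like Copilot suggests the routine code beneath it. The file name and column names here are invented for illustration, not taken from any real project.

```python
import csv

# Group rows from a CSV content export by their content type (the kind of detail
# Patrick mentions forgetting: called "type" in one place, "bundle" in Drupal).
def group_rows_by_type(path):
    groups = {}
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            groups.setdefault(row["type"], []).append(row["title"])
    return groups

if __name__ == "__main__":
    # Hypothetical export file; any CSV with "type" and "title" columns works.
    for bundle, titles in group_rows_by_type("content_export.csv").items():
        print(f"{bundle}: {len(titles)} items")
```

Whether a suggestion like this is actually correct still comes down to the developer reviewing it, which is the human-in-the-loop point Patrick makes.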

George DeMet
Rob, you mentioned before you've been using Miro's generative tools.

Rob DeVita
I've used it for some fairly straightforward stuff. As a project manager, communication is crucial, and I tend to be a bit verbose in my writing sometimes. ChatGPT can be great for giving it some text and asking it to make it shorter without removing any of the meaning, and it does a pretty good job.

As for Miro AI, it's been really helpful and fascinating. For those who don't know, Miro is a digital whiteboard tool for remote collaboration, and they recently added a suite of AI tools powered by Microsoft Azure AI. This includes everything from generating an image, Midjourney-style, based on some text you've written, to drafting acceptance criteria on a Kanban card, to generating branches on a mind map with questions, ideas, and topics. Similar to what Jack and Patrick said, it's not replacing the work we need to do, but it helps to speed things up and streamlines some processes that we've accepted as day-to-day work.

One example was when I met with a teammate to do some co-working, and we tried out the Miro AI tool. What would have taken us about 30 minutes to come up with baseline ideas and questions, we got in about 30 seconds with Miro AI. That didn't complete our work, but it advanced us 30 minutes into the future, where we could pick that up and run with it. It made things a lot more efficient and improved our work.

Jack Graham
You know, it's funny that you mentioned the image editor aspect of this, because I just realized, as you were talking, that the AI thing I've found most useful, the one I use all the time, is the Select Subject feature in Photoshop. It's an AI-driven feature that's been there for a few years now, and it can make a selection within a photograph based on saying, "All right, that's a person. I'm going to get all of that person." We're not talking about that kind of AI very much right now because machine learning and deep learning aren't as cool as they were a few years ago. Now it's generative AI, but that stuff is all still there too, and part of our lives, so we sometimes forget that it's there. I certainly did in that case.

George DeMet
And then, Oksana, how are you using AI to help you with work?

Oksana Salloum
I do play around with AI, but I don't integrate it into my work life because I'm always extra cautious, and I always want to make sure it's safe to use. Like Patrick said, sometimes when you ask a question, it'll give you a concept, but it doesn't give you the final answer. The other thing I notice is that the more details you give it, the clearer the answer is. So I'm kind of holding off on using it in my work life.

George DeMet
That makes sense, and we'll talk more in a little bit about some of the precautions folks should take when they're working with AI tools at work. I've been using primarily ChatGPT, but also Bard or Claude, for writing tasks. I do a lot of work with the marketing side of the house here, and it's a really useful tool for taking existing text and condensing or transforming it. One example would be a blog post we published recently that was adapted from a presentation. We had the transcript for the presentation, so I took that into ChatGPT and asked it to turn it into a blog post. We still needed to edit the copy manually, but it saved a lot of time.

The transcript that will accompany this podcast will very likely use AI as well, because the software-generated transcript captures the words correctly but doesn't always break the sentences in the right places. So I've been using it very much as a copy editor, and also to draft some tweets for our Twitter feed, things of that nature. Another great example was when we were responding to an RFP that had word limits on the answers to its questions; it was a great way to ensure our copy stayed within those limits.

Rob DeVita
George, you're reminding me of another way we've tested the use of AI here at Palantir. We provide ChatGPT with a template. So, if you're looking for sets of data but want it formatted the same way, you can give that to ChatGPT. Say, "Here is the template I would like you to use." It does a pretty good job, although there might be some cleanup needed.
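As a rough sketch of the template approach Rob describes, here is what it might look like through the OpenAI Python client; the model name, template, and sample notes below are placeholders for illustration, not anything we actually used.

```python
from openai import OpenAI  # assumes the official OpenAI Python package is installed

# The template Rob mentions: a fixed structure we want every record to follow.
TEMPLATE = """Name:
Role:
One-sentence summary (under 20 words):"""

raw_notes = "Ada Lovelace, early programmer. Alan Turing, computing pioneer."

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable
response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Here is the template I would like you to use:\n"
            f"{TEMPLATE}\n\n"
            f"Fill it in once for each person mentioned in these notes:\n{raw_notes}"
        ),
    }],
)
print(response.choices[0].message.content)
```

Even with a template, the output usually still needs the kind of cleanup Rob mentions.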

George DeMet
That's a really great example. Text transformation really plays into the strengths of these large language models. They understand human language really well because they've been trained on countless data points. So when you ask it to produce something in a particular pattern, it does a really good job. So, what precautions do you take when using generative AI tools, and what would you recommend others keep in mind?

Patrick Weston
One of the precautions I take when using ChatGPT for work is turning off the chat history. That should keep OpenAI from training on anything I feed it. It's a bit annoying sometimes because it'll lose context. I've also started redacting client names when I feed info into ChatGPT. For instance, we often name Drupal modules with the client name as a prefix. I replace that with something more generic like "my module" and then do the find and replace on my own. From a personal perspective, I do have the chat history on, but I don't think I've shared anything too sensitive.
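A minimal sketch of the redaction step Patrick describes, in Python; the client name, module prefix, and placeholders are all made up for the example, and in practice you would keep the mapping somewhere private so you can reverse the substitution on whatever comes back.

```python
# Hypothetical mapping from client-identifying strings to generic placeholders.
REDACTIONS = {
    "acmecorp_": "my_module_",       # e.g. a Drupal module name prefix
    "Acme Corporation": "the client",
}

def redact(text: str) -> str:
    """Swap client-identifying strings out before pasting text into a chatbot."""
    for real, placeholder in REDACTIONS.items():
        text = text.replace(real, placeholder)
    return text

def restore(text: str) -> str:
    """Reverse the substitution on whatever the chatbot sends back."""
    for real, placeholder in REDACTIONS.items():
        text = text.replace(placeholder, real)
    return text

print(redact("Fatal error in acmecorp_events.module for Acme Corporation"))
# Prints: Fatal error in my_module_events.module for the client
```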

Jack Graham
So there are four main areas here that are risks that I try to keep in mind: Security, accuracy, disinformation, and economics. From a security standpoint, I work with government clients. I have to tell them before we do an AI mood boarding exercise, for example, "Don't put anything confidential, classified, or unpublished into the mood board." They're government employees. They already know that, but I do have to say that anyway, and likewise with any confidential client or government information.

Accuracy is a big one because I find myself frequently having to question the AI's answers; sometimes it just gets things wrong. For instance, I asked it to translate the Bill of Rights from late 18th-century English into English as it's spoken now. That was interesting and has a lot of implications for the legal system, given that our government now dictates that things be written in plain English. Is that retroactive to the Constitution itself? Who knows, but it has a lot of interesting implications. One thing I noticed was that when it translated the First Amendment, it used the term "free expression." I immediately stopped the generation and said, "Wait, 'free expression' is never used in there, and you're not translating; you're just making stuff up now." So the AI backtracked and said, "Oh, yeah. You're right, 'free expression' isn't actually mentioned. Here's a real translation." So it'll try to pull a fast one on you.

From a disinformation standpoint, we do not want to be spreading disinformation by accident. For instance, if I create an image just to make a funny meme and it looks too real, I'll say that it was made by an AI. I think people should start doing that.

And finally, for economic reasons, I try to think about the economic implications of anything that I use AI for. For example, I have done some independent game publishing. I would not replace my paid human illustrators with an AI. I don't feel right about that. I would not use a self-driving car because 30% of our economy relies, in some way, shape, or form, on people driving. If you put 30% of the economy out of work, the rest of us are pretty hosed too, because of the effects on consumer spending. We have to look ahead to that and think carefully about what we're letting AI do for us.

As knowledge professionals, the idea of using AI to sort of pan for gold and get the good ideas that you need to do your work, to organize thoughts, that's all legit. That's something that's coming. We're going to have to adjust to it and live with it. But we have a choice between using it in smart ways like that and using it to straight up replace people. I hope we'll be doing a lot of the former and very little of the latter.

George DeMet
I recently read an article by Ted Chiang, the science fiction author, who has written some really good pieces about AI. One of his recent articles argued that the real danger of AI is not so much a Terminator-like apocalypse or the end of humanity; the thing we really need to watch out for is that it will be used to further entrench capitalist power structures and inequality. Jack, you raised some really good examples there of different ways that AI could be used by very powerful, very well-resourced people and corporations to undermine those who have less.

Jack Graham
Yeah, Ted Chiang's work is 101. If you want to read a sci-fi writer who's really thinking deeply about how this stuff is panning out, he is a must-read.

Oksana Salloum
I want to say that new technology generally can make, process, and produce things cheaper and easier, but at the same time, that can mean you do the same work with fewer people, or you do much more with the same number of people. If you think of it that way, AI might not be that negative for some people.

George DeMet
It is interesting because it seems like every time a new technology comes along, the pitch is always, "Oh, it will make your life easier, and you'll now suddenly have more free time." But the reality, as you say, is either that we get asked to do more stuff to fill in that time, or the people who formerly did those tasks are put out of work completely.

Rob DeVita
Jack, I'm definitely with you on the self-driving cars thing. There are a lot of cool things you can do with AI; we've talked about that a bit. So, you know, generating some cool art for your Dungeons & Dragons campaign, that's good. Having a driverless car barreling towards me, taking a picture of my silhouette and trying to guess my BMI so it can run the trolley problem on me? Not so great. I know I'm exaggerating a little bit, but not that much.

Safety and security are not being taken seriously enough, and I don't just mean data security or what we should be entering into the ChatGPT field and submitting. I mean the personal safety of the user. A lot of companies are not only throwing caution to the wind; they're actively pushing AI on their users. I would encourage people to be very careful about the data they're providing to these AI models, even if they say they're not keeping their history, because, as we know, some of these social media companies have also said they're not sharing any of that data, and we know where that went.

A team member of ours recently shared an episode of a podcast called Your Undivided Attention; the episode was called "The AI Dilemma," and I would encourage folks to listen to it. There is a pretty horrifying story in it about Snapchat's AI bot. It reminded me that I have a Snapchat account, so I opened Snapchat, and when I tried to remove that AI bot from my friends list, I could not. If there is a way to remove it, I couldn't find it. It's not easy, and that feels very gross to me.

George DeMet
We could do an entire episode on digital surveillance and the ways that our information, our browsing history, everything is used and monetized without our knowledge or consent. I certainly do share the concern that AI can supercharge that in pretty scary ways. 

One of the things that I've been thinking about and am concerned about when it comes to the code side is that we work with open source software, and the ability to open source software relies on the ability to copyright it. The US Copyright Office has published a memo determining that if something is wholly or substantially generated by an AI, they won't register a copyright for it. So they draw a distinction between that and a work where you're using AI to assist. The analogy would be that if you use Photoshop to modify an existing image that a human photographer took, that is still copyrightable because a human being directed the creation of the work. However, a completely computer-generated image, even if you gave it the prompts and everything, would not be eligible for copyright. That's what the Copyright Office's current guidance states.

I do worry about the implications this would have if people are generating large amounts of AI-generated code, because technically that code can't be made open source. First of all, it may come from a place where it's already been copyrighted or released under a different license, and there are lawsuits about that right now. And if you can't hold a copyright on the code, you can't attach an open-source license to it. I know there are folks who are thinking about this, concerned about this, and working on this problem. But in the meantime, I think those of us working with AI in software need to be really cognizant of it.

Patrick Weston
I've found that, and I don't know if it's a detriment or a shortcoming of ChatGPT, but maybe it helps address some of this: I've never gotten a code snippet from ChatGPT that I can just copy, paste, and run. There's always something wrong in a couple of places. Jack, I think you were talking about accuracy; one benefit on the code side is that you can put the code in and see if it works or not. There's a yes or no in terms of the accuracy of something that ChatGPT gives you. I've had it just make up entire functions, and I'll be like, "Hey, I'm not seeing that. I'm getting an error that that function isn't a thing." And it'll be like, "Yeah, you're right, that's not a thing." It even makes up functions in areas where you would think it would have a well-defined library to draw on. I guess as of right now, it feels like the copyright side is somewhat protected by ChatGPT's own shortcoming of not being able to give you truly copy-and-pasteable code. But that's not to say it won't get better in the future, and that problem would definitely be something to consider.

Rob DeVita
It does act like a hyper-intelligent toddler in that way, Patrick. It can be so confident, and then when you reason with it, it realizes the error of its ways. But it's fascinating that it's so confident, as we said earlier in this conversation.

George DeMet
I want to know what toddlers you know that recognize the errors of their ways.

Rob DeVita
That's a fair point, George. Retracting my statement.

George DeMet
Was I just talking to Rob, or was I talking to ChatGPT? Who knows!

Patrick Weston
I was about to say the same thing.

George DeMet
We've talked a bit about some of the ways you can use generative AI technologies to make things easier, better, or to assist on a personal level and in the workplace, as well as some of the precautions and dangers of overly relying on AI technologies. I'm curious how you think generative AI will impact culture and society.

Rob DeVita
I think that capitalism has been obscuring and even ruining some of these valuable outcomes of AI. Some larger corporations have been thinking about how they can replace human labor to save money and cut corners, and that's the wrong approach. There's a nuance and grace to human labor, creativity, and empathy in our work that AI is not ready to replace. That's why we see more and more strikes by labor unions. It will keep happening if we treat AI as the answer to all the budgetary problems these companies have. AI should not replace people; it should help people and enrich our lives.

Patrick Weston
I agree with you in large part, Rob. The one thing that I'm also concerned about is the bias that gets baked into a lot of AI algorithms and decision makers, whether that's facial recognition being trained on certain data sets or something else; there's just all sorts of bias. I feel like we can both program that in and also try to fight against it. I'm worried about the implicit bias that comes with a lot of these systems and how it will impact people in their day-to-day lives.

Jack Graham
I'm very excited about using it to replace billionaires. I had it write a better business plan for Twitter, as if it were Reed Hastings, and it made a heck of a lot more sense than anything that Mister Musk was doing. You know, he's actually pretty useless, and when you can demonstrate that an AI can think about problems more clearly than him, the whole house of cards that's propping up the mega wealthy kind of comes falling down. Corporations need leadership. They need experience. They need people to hold them together. We have this wonderful model of servant leadership at our company that works very differently and operates very differently than what you see in the Silicon Valley tech world. And when you consider the fact that our economy is basically run sometimes by a bunch of gamblers, really where we should be applying AI is to business decision making, investing, finance, currency modeling, things like that, where rational decision making instead of stupid risk-taking would make all of our lives better. But that's not what's going to happen. We're going to use it to replace drivers, and then we're going to have an economic apocalypse.

Oksana Salloum
After everything that's been said, I still see a positive impact on us. Yes, it's in our daily lives, and it makes our lives easier and more convenient. We have better access to education, to learning something new, and to increasing our productivity. That's how I see it.

George DeMet
Again, thinking not so much about the hypotheticals but taking a more practical perspective, I think it'll be like most technologies, right? It'll make some things better and some things worse. I do have concerns about how those who are very powerful and wealthy will use it to try to remain powerful and wealthy, and to become even more so. But I also hope it will be a tool that helps us with scientific advancements in various fields. I know people are eager to apply it to the problem of climate change; at the same time, the technology itself has a massive carbon footprint. And I'm really worried about the impact on the usability of the web in general. We've already started to see so much garbage AI-generated copy flooding every part of the web, which makes it more difficult to find good quality information. That concerns me greatly.

Jack Graham
Science fiction to date has largely concerned itself with AI personhood - machines becoming fully sentient and sapient, wanting their rights, or wanting to kill us. Part of the problem is that we need a better vocabulary for AI. Right now, we call it all "AI", but there are clear differences between the generative AI we're excited about right now, deep learning, machine learning (which are forms of AI), and the type of AI, the generalized intelligence, that we talk about for the future. Sci-fi has been overly concerned with generalized intelligence and hasn't spent enough time talking about how more limited forms of AI could cause a lot of disruption as well.

George DeMet
Let's just go there right now. This is a really important question because what we see in culture really impacts how people think about these technologies, especially when they're called "artificial intelligence" even though there's actually nothing intelligent about them. It conjures up memories of HAL 9000, the Terminator, or other classic science fiction icons.

Jack Graham
I'm glad you brought up 2001 because it's a great example of this. In 2001, there are iPads, but nobody really followed up on that. iPads and mobile devices like that are not that big a deal in that world; they're just a technology. Arthur C. Clarke never thought of any of the second-order uses; he never thought about how having a tablet would affect society. It's just there in the background. The same thing has happened with AI. Science fiction writers will drop references all the time to the AI that makes all the decisions about taxes, or the accounting AI, or the AI that runs our spaceship, as if these are all just background technologies sitting there that don't have any really big effect on society. Meanwhile, we're distracted by the Terminator.

George DeMet
I actually did my college honors thesis on 2001: A Space Odyssey, so it's a film I know very well. Because they talked with a lot of people working in technology as the field existed in the mid-60s while doing the production design and research for the film, they did get a lot of those background technologies right. But I'm curious what other examples folks are particularly drawn to.

Rob DeVita
I've been a longtime fan of The Hitchhiker's Guide to the Galaxy, so Marvin the Paranoid Android really resonates with me. Something about a robot that's been given a giant brain and is so intelligent that it ends up bored and depressed is a little bit too real.

Patrick Weston
One of my peak nerd things is that I name my electronic devices after assistants or cool sciency people. So my computer is named after Ada Lovelace, who was one of the first programmers. Another of my devices is named Watson, and the Raspberry Pi I have at home that does some light server things around the house is named Jarvis, from the Iron Man movies. I realize that's not from the way-back Marvel canon, but when I was a younger kid, I appreciated it and it seemed like a cool thing, so I named it that.

Oksana Salloum
My favorite is Wall-E.

George DeMet
Wall-E, yes. He’s adorable!

Oksana Salloum
Yeah, I know. I've watched that movie so many times and I watch it with my daughter. I adore Wall-E.

Jack Graham
Oh, you know what's actually more like a generative AI than any of the typical AI examples from movies? The computer in the Batcave is like a generative AI. Batman and Robin feed questions into it, and it gives them detailed, context-sensitive answers. I think that's the closest thing to a generative AI that I've seen anywhere, and it's from those silly 60s Batman shows.

George DeMet
That's very accurate. And for my example, I'm gonna pull from something even more obscure: the early 70s sci-fi cult classic film Dark Star. In that film, which is kind of a comedy about a crew in space, there's a scene where an artificially intelligent bomb is set to explode. One of the members of the crew tries to reason with the bomb to keep it from exploding and destroying the spaceship. I won't tell you how it goes, but it is hilarious. It's a good Saturday night midnight movie if you're looking for something to put on. It's clever, funny, very low budget, but a fun film nonetheless.

Rob DeVita
Sounds great all around.

Jack Graham
How have I not seen this?

George DeMet
Thank you so much everyone for a great conversation. We'll be back in a few weeks with another fresh episode. In the meantime, check out our website at palantir.net for more insights. And of course, don't forget to like and subscribe. Thank you.

[Music plays]

[Audio clip from the movie “Dark Star”]

Doolittle
Now, Bomb, consider this next question very carefully: What is your one purpose in life?

Bomb #20
To explode, of course.

Doolittle
You can only do it once, right?

Bomb #20
That is correct. 

Doolittle
And you wouldn't want to explode on the basis of false data, would you?

Bomb #20
Of course not.

Doolittle
Well, then. You've already admitted that you have no real proof of the existence of the outside universe.

Bomb #20
Yes, well…

Doolittle
So you have no absolute proof that Sergeant Pinback ordered you to detonate.

Bomb #20
I recall distinctly the detonation order. My memory is good on matters like these.

Doolittle
Of course you remember it, but, but all you're remembering is merely a series of sensory impulses which you now realize have no real, definite connection with, with outside reality!

Bomb #20
True, but since this is so, I have no proof that you are really telling me all this.

Doolittle
That's all beside the point! I mean the concept is valid no matter where it originates.

Bomb #20
Hmm. 

Doolittle 
So if you detonate, you could be doing so on the basis of false data!

Bomb #20
I have no proof it was false data.

Doolittle
You have no proof it was correct data!

Bomb #20
I must think on this further.
