Kajeet Executive Chairman and co-founder Daniel Neal shares his thoughts on the ethical use of AI, both within Kajeet and beyond. Read this important essay, and watch the video below to hear Daniel discuss this critical topic in his own words.
We at Kajeet believe that every tool we humans use, whatever that tool is, has to be used ethically. And a company, like a person, needs ethical core values that govern how it uses tools to support its products and its customers, and to deliver value in the marketplace.
Generative AI is a supremely powerful toolset that magnifies the effects and the intent of its users; in a way, AI amplifies the ethical capabilities and proclivities of those who wield it. At Kajeet, one of the things we're putting a lot of thought into is how those AI tools can be used for tremendously good things, and for some very bad things -- and the gap between those two poles, if you will, is very wide and possibly widening. We hear this in the media all the time: People are discussing whether AI is a good thing or a bad thing. How dangerous is it? How can it be used for good?
To help answer these questions, customers in every marketplace need to assess their technology providers, their product providers, and everyone working with them to achieve their goals; and they have to assess, very pointedly, the character of the company and the people they're working with. It's never been more important for someone buying a very powerful AI tool, or a platform that incorporates AI, to really understand who's building that tool and who's behind it. There are many important questions that potential customers should be asking: Who are the people putting this out there, and what drives them? Do they have an ethical corporate culture? Can they be relied upon? Will they be transparent with us if something goes amiss, or if something just doesn't add up? And to take a larger view, will they share with us the benefits gained from the application of these powerful tools?
We at Kajeet are hyper-aware of the kind of scrutiny our customers should apply to our company, and we strive to be a positive, good actor in the marketplace: one that puts our customers' interests first, and that always uses the power of our tools and our platform for good ends.
That's a pretty big word, "good," and it can be interpreted differently by different people. But we're mindful of it in every application, and that's a conversation we love to have with our customers. We serve people who are revolutionizing the automobile industry through electric vehicle charging; we serve libraries; we serve schools; we serve Native Nations and Tribal organizations delivering very important services for their people. And in every case, we at Kajeet seek out customers where our alignment is very, very clear: to do good things in the world and make improvements in the world. That's a very satisfying place to be.
AI models can carry biases that are very harmful and very dangerous. So in the engineering of prompts, and in the structure of the products that enable the prompting, Kajeet is supremely mindful of the need to be sophisticated and thoughtful, so that we can continually eliminate bias that is untoward or unacceptable and that comes back to us from material that may have been ingested into the AI machine, if you will. This is not something one can do casually. You cannot be passive when it comes to using AI: We all have a responsibility to bring not just our good intentions, but very mindful, ethical behaviors and thoughtful engineering, to make sure the tool delivers value for all in a fair and equitable way. If we don't engineer the product, the prompts, and our engagement with AI in a careful, thoughtful, ethical way, we're in for some very serious problems.
This was reinforced for me by a powerful study I saw in which someone input a very simple prompt into an image generator. (I won't name the image generator; they all do this, and they all have the same sorts of problems.) The prompt went something like this: "Produce an image of six accountants in a conference room." And yes, your suspicion may be right: When the image came back, the accountants were all white males, all of the same age group. They basically looked like the same person, which raises an important question: What's going on there?
The answer is that there's a bias built into the AI as to what a typical accountant looks like. But ask yourself about the accountants you know, or the people in accounting you know -- those who aspire to it and those who have been very successful in it -- and I think you'll find that the image of all accountants being white males of the same age doesn't match reality. Yet that image was produced by an AI model from a simple, apparently factual prompt. Almost certainly, the developers did not intentionally train this into their large language model -- and just as certainly, it reveals the need for both ethics and vigilance when AI products are being built, and it shows how significant prompt engineering is as a way to combat the latent bias in all of these models.
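For technical readers, here is a minimal sketch of one form the prompt engineering described above can take: augmenting a user's prompt with explicit diversity guidance before it ever reaches the image model. Everything in this sketch is illustrative -- the function, the keyword list, and the guidance text are hypothetical examples, not part of any Kajeet product or any real image generator's API.

```python
# Illustrative sketch (not a real product or API): append diversity
# guidance to prompts that appear to depict people, before the prompt
# is passed along to an image-generation model.

DIVERSITY_GUIDANCE = (
    "Depict people with a realistic mix of genders, ages, and "
    "ethnicities unless the request specifies otherwise."
)

# Hypothetical terms suggesting the prompt depicts people.
PEOPLE_TERMS = ("person", "people", "accountant", "doctor",
                "engineer", "teacher", "worker", "team", "group")

def augment_prompt(user_prompt: str) -> str:
    """Append diversity guidance when a prompt appears to depict people."""
    lowered = user_prompt.lower()
    if any(term in lowered for term in PEOPLE_TERMS):
        return f"{user_prompt.rstrip('.')}. {DIVERSITY_GUIDANCE}"
    return user_prompt

if __name__ == "__main__":
    print(augment_prompt(
        "Produce an image of six accountants in a conference room"))
```

A real product would go well beyond this simple keyword check, of course, but the principle is the same: the counterweight to a model's latent bias has to be deliberately engineered in; it does not appear on its own.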
So, the biases inside of AI must be combated by ethical, thoughtful engineering. And that has to show not just in the engineering of the prompt, but also in any models we make available to our customers through our products and services. Being an ethical company is a core value at Kajeet, and we bring that value to our own use of AI -- and we encourage you to do the same.
Have questions about AI, our core values here at Kajeet, or anything else?