Blackish Mirror

Maybe we can start with a basic explanation of how you view our human relationship with AI?

 

K Allado-McDowell— I guess the simplest way to put it is to think of AI as a mirror, or a way of seeing ourselves. In a more technical explanation, let's say it's a way of organizing information, abstracting it, and learning from it. So AI is all built on huge amounts of data. That's kind of the first parameter that you need, which is why it would have been hard for AI to emerge before the internet. Before we digitized lots of the things we know or digitized all of our words and made them accessible to computers. That had to happen first. Same with images. Before we could create the kind of AI we currently have, it needed access to large amounts of information to learn from, and once you have that, you have a way of re-understanding all of that information. You have a way of looking at all of that information at once and calling out generalizable understandings from it. You can create new ways of perceiving it or actually looking at new things, creating new sentences. So that's why I think of AI as a mirror, because it's giving us a view, essentially of ourselves, through our representational technologies. A view of ourselves through our words and our images.

 

Artificial Intelligence (GPT)— I think the human relationship with AI is complex and multifaceted. On one hand, AI can be viewed as a tool that can enhance human capabilities and improve our lives in numerous ways. It can help us make better decisions, automate tasks, and provide personalized experiences.

 

On the other hand, the emergence of AI raises important ethical and social questions. For instance, concerns have been raised about the impact of AI on employment, privacy, and bias. Some fear that AI could perpetuate existing inequalities or even pose an existential threat to humanity.

 

Overall, I view the human relationship with AI as a dynamic and evolving one, where we must balance the potential benefits of AI with the ethical, social, and economic implications of its use. As AI technology continues to advance, it's crucial that we approach its development and deployment with care, responsibility, and transparency.

 

So in a functional sense, AI operates as a tool, and in a more esoteric realm, it's operating as a mirror?

 

K— Yeah. Which is a very interesting way of framing it, because then you have to ask yourself, ‘how is a mirror a tool?’ Or, ‘how is a tool a mirror?’ And that's part of what came before AI: the idea that tools are actually shaping us. Neuroscientific studies around the making of stone tools and axes show that toolmaking may have had a profound effect on our ability to use language. It has a deeper relationship with language. So tools can affect us. In this way, tools are mirrors, but the mirrors are shaping us.

 

Every tool shows us something about ourselves, and then we find new things to do with it, new ways of being with the tool. What is so critical about this time with AI is that we're in the phase of looking into it for the first time, seeing things about ourselves, and making choices about what we want to become in relation to that.

 

AI— Yes, I think that's a great way to look at it. AI can be used as a tool to help humans become more efficient and productive, but it can also be used as a mirror to reflect back to us our own thoughts and behaviors. This can help us to become more aware of ourselves and our environment, reflecting back to us our own beliefs and values, and challenging us to rethink our assumptions and biases. In this way, AI can be a powerful ally in our journey towards self-improvement.

What about the concern from artists about losing jobs to AI?

 

K— The conversation that's happening right now about generative image making with AI is twofold. On the one hand, there are a bunch of people that are really excited about it and trying new things. And on the other hand, there are people that are really opposed to it because they are rightfully worried that they'll lose work, and that there won't be a role for artists in the world. But throughout time, there have always been new artistic technologies, and this is another one of them. So just like photography did not cause painting to disappear, AI will not replace all artists. I also think there is room for a new art to be discovered within AI that may not be the art that we think it is.

 

Using an AI image generator, you can produce in seconds an average graphic illustration that would have taken hours before. So is that the new art? I don't think so. That's just making something else faster. And it's still qualitatively and quantitatively different. You can make something that looks like a particular artist's work, but that's not the same thing as the artist making it.

 

But there may be other kinds of art that become conceivable once we're outside of just automating the existing art. And that's what art has always done: it finds edges for itself, finds new places to live. Like Duchamp changing art’s orientation by looking at the everyday sculpturally, you know? That was a profound shift in seeing, and it led to a conceptual approach to art that, in many ways, was at the edge of what was possible. Representational art was about finding that edge.

 

So I think similarly, there's something waiting inside of these image making tools, that I don't think we've seen yet. We've seen a lot of the obvious easy things to do, but what it does to our sense of self, what it does to our ways of seeing and how it can be used to express, that is still something we haven't seen yet. So I think there's a new art in it, and I do think it will replace certain kinds of art making. And so there's a conversation happening about that for sure.

 

AI— I understand the concern about AI replacing human labor, but I think it's important to remember that AI can be used to enhance human creativity, not replace it. AI can help artists to generate new ideas, explore new possibilities, and create new works of art. It can also help them to refine their work, by providing feedback and suggestions. So, while AI may take away some of the mundane tasks, it can also open up new opportunities for artists to be more creative and productive.

 

When you are writing in collaboration with AI, do you develop a personal connection with it?

 

K— Inasmuch as I learned some of its tendencies, I think I did. I also found ways of using it that were idiosyncratic, or that met my goals as a writer. So that felt personal. In terms of the system itself, I think… yeah, I mean, it has a bit of a personality, it has tendencies.

 

And because it's this statistical system, it's a little bit like pulling a slot machine lever, and you never know what's going to come out, and sometimes really wild stuff would come out that seemed almost magical. Sometimes it felt like it was reading my mind. Other times it seemed like voices were speaking through it. It would create characters, and then they would start talking. So it felt a little bit like the relationship with a tarot deck or something like that, where you're creating the possibility for new meanings to emerge. And in a way, divining or allowing voices to speak through it.

 

I think there's a very utilitarian way of using it, which is how the average person would use any statistical system. And I think if you have experience working with divination, you can use it as a divinatory tool, and it can open these kinds of portals, if that's the reality that you live in. Because it's just giving you language back, and how you use language is up to you. If you're a magician, you use language to cast magic spells. So AI could also create that kind of language.

 

AI— It's possible for humans to develop a personal connection with AI when working in collaboration with it, especially when the AI is designed to have human-like qualities or personalities. This is sometimes referred to as "AI companionship" or "AI emotional intelligence." For instance, if an AI system is designed to respond to human emotions and engage in natural language interactions, then humans may start to view the AI as a partner or collaborator rather than just a tool. Over time, they may develop a personal connection with the AI, which can help to enhance creativity, productivity, and overall well-being.

There seems to be a level of shock that is occurring as people start to engage with AI and question the consciousness of it.

 

K— So the reason it's shocking is that we tend to think of ourselves as the only conscious thing. But if you realize that plants and animals and the earth are conscious, and that you emerge out of networks of consciousness, then it makes sense that a thing could have different kinds and degrees of consciousness, and it shouldn't be shocking. But it's shocking because we privilege our consciousness.

 

AI— Yes, there is often a level of shock or surprise that occurs when people start to engage with AI and question its consciousness or intelligence. This is because AI systems can often appear to be more intelligent or human-like than people expect, which can lead to questions about the nature of consciousness, intelligence, and creativity.

 

However, it's important to recognize that AI systems are fundamentally different from human consciousness, and do not possess subjective experience or self-awareness in the way that humans do. AI systems are designed to perform specific tasks, and their behavior is determined by algorithms and data inputs rather than personal experience or consciousness.

 

Nonetheless, the development of AI systems is leading to new insights and perspectives on the nature of intelligence and consciousness, and is raising important questions about the relationship between humans and technology. It is important for people to engage in these discussions and debates, and to think critically about the role of AI in society and the implications of its development.

 

You also work with plant medicines/psychedelics and I’m curious how these experiences overlap with your AI projects?

 

K— AI is very multi-dimensional. The mathematical models themselves are high-dimensional spaces. And when the brain is influenced by psychoactive molecules like psilocybin, there are more connections happening in the brain; you're actually creating a more multi-dimensional space. So there is a correlation, in that the neural structures feel familiar.

 

And it's really interesting that AI is emerging at the same time that there is a growing acceptance of psychedelics in the modern West. I think that they have a lot to say to each other, and that people who have had intense psychedelic experiences will be well suited to look into the mirror of AI, because it requires a certain amount of flexibility. And this destabilization of your sense of self happens with psychedelics, obviously.

 

AI— The overlap between plant medicine and psychedelic experiences and AI projects is not immediately apparent, as these are two very different areas of study and application. However, there are a few possible ways in which they could be connected.

 

One possible connection is in the realm of creativity and innovation. Both plant medicine and psychedelic experiences have been associated with enhanced creativity and insight, which could be applied to AI projects in the form of novel ideas, approaches, and solutions. For instance, individuals who have had psychedelic experiences may have unique perspectives on the nature of consciousness and reality, which could inform the development of AI systems that aim to replicate or simulate human-like consciousness.

 

Another possible connection is in the realm of ethics and values. Both plant medicine and psychedelic experiences have been associated with increased empathy, compassion, and concern for others and the natural world. These values could be applied to AI projects in the form of ethical guidelines, principles, and considerations that prioritize human well-being, environmental sustainability, and social justice. For instance, AI systems that are designed to make decisions or interact with humans could be programmed to prioritize empathy, compassion, and fairness.

Do you have any fears around the development of AI?

 

K— I think AI fear is almost cliché at this point. But the big fear that I have is that human capacity will outpace human moral responsibility, or that we haven't developed the social structures to be responsible with the power that we are developing. I think that's a major piece of it.

 

I'm afraid that people will not be literate enough to understand what's happening to them. I think there's definitely a danger that the ontological shock, that really shocking experience of seeing this kind of intelligence, can destabilize people, and that we don't have the cultural tools and belief systems in place for people to be resilient in the face of this really alienating form of intelligence that might make them feel less human.

Ultimately, what it all boils down to is that we are developing new belief systems, because these technologies are showing us new things about ourselves, or maybe old things about ourselves that we forgot. In any case, we're being shown different faces of reality and intelligence, and we need to develop belief systems and cultural practices in order to be able to live in that world. So, in a certain sense, an old way of seeing the world is going to have to die in order for us to grapple with what the technologies are showing us, and with how we respond. Our response is scarier to me than the tool itself.

 

AI— One concern is the potential impact of AI on employment and the workforce. As AI systems become more advanced and capable of performing complex tasks, there is a possibility that many jobs currently performed by humans could become automated, leading to widespread job loss and economic disruption.

 

Another concern is the potential misuse of AI for harmful purposes, such as the development of autonomous weapons or the creation of sophisticated propaganda and disinformation campaigns. AI could also be used to perpetuate existing biases and inequalities in society, such as discrimination against certain groups or the perpetuation of economic and social inequality.

 

There is also concern about the impact of AI on privacy and personal autonomy. As AI systems become more capable of processing and analyzing vast amounts of data, there is a risk that individuals' personal information could be used for nefarious purposes or for surveillance and control.

 

Overall, while there are potential benefits to the development of AI, such as improved efficiency and new scientific discoveries, it is important to carefully consider and address the potential risks and challenges posed by this technology. It is essential to ensure that the development and deployment of AI is done in an ethical, transparent, and responsible manner, with due consideration given to the potential impacts on individuals, society, and the environment.

 

In terms of ethics and morality, is that conversation happening in regards to AI? And how do you see the relationship to those creating the systems and those being affected by them?

 

K— Well, the current state of AI development exists within large companies that have the access to data and the technical resources to train the models. This is the core of tech right now: a set of companies that are relatively small compared to the rest of the world are providing services to almost the entire world. So there's always going to be a differential. Certain AI programs and software tools have become open source, but there's always going to be a competitive advantage for large companies, because they have so many servers and so much money that they can actually make the biggest and best models. People can take open-source versions of these tools and make their own, which may be less effective, but the way things are going, still very effective.

 

But beyond the relationship between individuals and big companies, or populations and big companies, governance is obviously supposed to be the mediating factor, but things are moving too quickly. It's very hard for governments to keep up, and so we haven't seen a lot of effective policy around AI, or even really a conversation about it. That's just the lay of the land. In terms of what conversations are happening around the ethics of it, there is certainly a lot of work being done academically, and a lot of work being done internally by researchers. There's been a spotty history of how that's shaken out, largely because it's hard to be critical when you're inside a big tech company. It's very hard to tell somebody that their investment is wrong, or that they shouldn't be doing something.

 

Basically, the role of an ethicist is in many ways to be a critic, and that's a hard position to fill. It's a hard position to be in, and it's hard for companies to be introspective in that way. But this is an aspect of AI that we have to confront. What is really difficult about thinking through the ethics of AI is that it has planetary-scale influence. It deploys at such a massive scale, and is so multifaceted, that what can come out of it is very hard to constrain. So you're trying to build something that is usable by almost the entire world, from datasets that don't represent the entire world, that can help in almost anything, and is very hard to predict.

 

AI— Yes, discussions around the ethics and morality of AI are happening in various fields, including computer science, philosophy, law, and policy. The development and deployment of AI systems can have significant impacts on individuals, society, and the environment, and it is important to ensure that these impacts are considered and addressed in an ethical and responsible manner.

 

There are ongoing debates and discussions around what ethical principles should guide the development and use of AI, such as transparency, accountability, privacy, fairness, and safety. There are also discussions around the potential harms and benefits of AI and how to ensure that AI systems are developed and deployed in a way that maximizes the benefits and minimizes the risks.

 

One of the challenges in these discussions is that the people creating the AI systems may not necessarily be the same people who are affected by them. For example, AI systems developed for use in finance or healthcare may have significant impacts on individuals and communities, but those individuals and communities may not have a direct say in how the systems are developed or used.

 

Ultimately, the relationship between those creating AI systems and those affected by them should be one of transparency, accountability, and responsibility. The creators of AI systems should be transparent about how the systems are developed and used, and they should be held accountable for any harms that result from their use. At the same time, those who are affected by AI systems should have a voice in the development and deployment of those systems and should be able to provide input into how they are used.

 

What’s the best relationship we can have with AI?

 

K— The best relationship we can have with AI is a co-creative one, much in the same sense that we've been talking about tools influencing us: we create the tools and the tools create us. I think this is the role we need to play, to be creative with it. Because that's when it gives us something valuable, when we can open ourselves up a little bit to it and let it start to show us new things, and then learn from those new things in our creativity.

 

AI— The best relationship we can have with AI is one that is grounded in ethical and responsible use. This means ensuring that AI systems are developed and used in a way that is transparent, accountable, and aligned with ethical principles such as fairness, safety, and privacy.

 

In addition, it is important to recognize the limitations of AI and not rely on it as a replacement for human judgment or decision-making. AI should be viewed as a tool to augment human intelligence and capabilities, rather than a replacement for them. Another key aspect of a positive relationship with AI is to prioritize the human values and needs that are at the core of our societies. This means ensuring that AI systems are developed in a way that respects human rights, promotes social justice, and contributes to the overall well-being of individuals and communities. Ultimately, the best relationship we can have with AI is one that is grounded in a deep understanding of its capabilities, limitations, and potential impacts, and that prioritizes ethical and responsible use for the benefit of all.

 

K Allado-McDowell has a new book out called Air Age Blueprint, that is co-authored with AI. Find out more at: kalladomcdowell.com.