Festival publication

Table of Contents

1.    The Civilization of Algorithms
2.    An Augmented Society
3.    To What Extent Will We Allow Artificial Intelligence
4.    Real and Imagined Technologies
5.    The Hybrid Society is Coming
6.    The Computer is an Amplifier of my Capabilities

Wiktor Gajewski

Why does artificial intelligence not have hair? The images that search engines offer in response to the keyword "AI" are robots with shiny plastic plates drawn in such a way that the face mask reveals a skull filled with electronics. Or the profiles, mostly feminine, made up of a waterfall of digits – instilled in our collective imagination by "The Matrix". Only the colour has changed – now the mandatory colour of innovation is blue. Ava, from Alex Garland’s “Ex Machina”, wears a wig only in the last minutes of the film, as a gesture of closing the process of her emancipation. In this light, I believe that Google has acted deeply maliciously, choosing to demonstrate the strength of its voice assistant, Duplex, precisely to schedule a visit to a hairdresser’s…

Setting aside the jokes, we are indeed at a time when the development of technology requires a reshaping of our imagination. The images that culture has provided us with since the beginning of our civilisation are becoming obsolete. Artificial Intelligence will not stand before us as Galatea, Pygmalion’s ivory wife, the clay Golem, the wooden Pinocchio, the Tin Woodman from the Land of Oz or C-3PO from a distant galaxy. The human body as an interface for a thinking algorithm can only become a fad or a fetish. Rather – and this is already happening – we will meet digital intelligence in the car, in the military drone, in the doctor’s surgery or in the insurer’s office. Just as we come across it in a watch, a washing machine, a light bulb, a television or a telephone. We are on the verge of an Algorithm Civilisation, a world that we are shaping on an equal footing with increasingly advanced digital technologies.

Artificial Intelligence is an umbrella term for many of them. We have learned to teach programs to make their own decisions and develop their own code, feeding on the vast amounts of data we generate in the digital world. We give them the eyes of lidars, the ears of microphones and the mouths of loudspeakers. Successive programs support or replace us in our daily choices – the next series, the next song, the next president – to suit our online "likes". We talk to virtual assistants with confidence, perhaps trusting them more than we would a human being. We let them read our emails, our faces, our pulses. At the same time – and I am an excellent example of this myself – we know less and less about what is happening "inside" the devices and applications that we so eagerly surround ourselves with. We gain a sense of comfort, but do we not lose the sense of agency?

The Przemiany Festival can help you answer this and other key questions. Whose interests are supported by Artificial Intelligence? To what extent do learning algorithms take over the values and prejudices of those who train them? Can we cope in a world where robots and programs will take over part of our professional work? How should we take care of our interpersonal relationships if we create them using mainly our own virtual avatars in social applications?

We invite you not only to learn more about new technologies during these few days, but also to take part in this extremely complicated debate. We want to combine expert knowledge with your personal perspective. Technology developed by specialists has a rapid impact on our lives. We should be able to engage in dialogue, and confront our values and expectations for the future of the world. This will be served by the festival debate, where we will ask you to test the experts’ visions of the development of artificial intelligence and its impact on the shape of society. During lectures and panel discussions you will be able to put questions directly to specialists dealing with AI, in both its technical and its ethical dimensions. A visit to the "Machina Sapiens" exhibition, where you will see projects by artists and designers from all over the world, will open you up to new ideas and stimulate your imagination. You will also meet entrepreneurs and innovators who implement artificial intelligence solutions in real life. But you are the Heroes of Przemiany – the success of the Festival depends on your creativity, courage and openness. Ask questions, even seemingly foolish ones, like the one about Artificial Intelligence’s hairstyle. I look forward to talking to you.

Aleksandra Przegalinska

The years 2017-2018 turned out to be a turning point for the broad field of artificial intelligence – including robotics, machine learning of many kinds, language processing and machine vision. The AlphaGo program, developed by DeepMind, first defeated Lee Sedol, a master of the traditional Chinese strategy game Go, 4:1, even though the game had previously been considered "non-algorithmic". It then fell in a match against an improved version of itself (AlphaGo Zero), revealing new, hitherto unexplored ways of playing Go.

Also in 2017, Sophia, a humanoid robot, travelled around the world as a conference panellist, getting people used to the sight of a humanoid as a speaker, and – moreover – was granted citizenship of Saudi Arabia. To this day, we still do not know what this actually means in her case and what rights and obligations arise from it for her. And that is not all – a bot from OpenAI, the laboratory co-founded by Elon Musk, all but crushed human players in the popular video game Dota 2.

In 2017 one more very interesting thing happened. Social media circulated electrifying, but also terrifying news. In an experiment in Facebook’s research and development department, bots that were to learn to negotiate in English in order to maximise the benefits of the exchange of goods spun "out of control" – the phrase itself illustrates our profoundly complicated attitude to advanced technologies – and created their own language (or rather metacode), after which they began to communicate with each other regardless of the rules governing the logic of human speech. It’s worth returning to this particular situation and remembering the dialogue which, at the request of man, but without his supervision, was then established by bots conventionally called Alice and Bob:

Bob: i can and everything else … … … … . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else … … … … . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else … … … … . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i … … … … … … .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else … … … … . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else… … … … . .
Alice: balls have zero to me to me to me to me to me to me to me to me to

Does it sound like a misunderstanding? Yet a discerning eye will notice the specific logic of this exchange and the fact that it clearly leads to something. It is not meaningless. On the contrary, it serves to optimise the precise exchange of goods. And this non-human new speech – hopefully not "newspeak" – is something we may have to get used to. Just like the new orders introduced by high technologies in various fields previously managed by people.

As the example of the Go game shows, what today is not algorithmic, may turn out to be such tomorrow. Already in the second half of 2018, the aforementioned Elon Musk announced the imminent implementation of the Neuralink project, the first complete brain-machine interface in the form of a micrometre-sized device that would be surgically implanted into the brain. According to Musk, this technology gives the capability for technologically enhanced people to beat AlphaGo Zero and other even more advanced projects. 

Whether we want it or not – both individually and collectively – in our society, and especially online, there are new entities that were once quite clearly classified as objects. This publication is also about this. "The Civilisation of Algorithms" is an attempt to sum up this turbulent period, boldly looking to the future – to what is on the horizon. In view of all the possible scenarios of further technological development, it remains to be hoped that technological augmentation will not be an extension of the battlefield, but the development of a range of best practices in the field of human interaction with each other and with artificial intelligence. For this to happen, the technology must become as inclusive, accessible and explainable as possible. 

Luciano Floridi
To what extent will we allow artificial intelligence?

Adam Mandziejewski: A virtual Google Duplex assistant recently turned out to be able to call and arrange a visit to a hairdresser for its owner. This amazes me. Could you share your opinion on the development of artificial intelligence? Where is it going and what awaits us there?

Luciano Floridi: It’s always hard to guess what’s ahead of us in the future. It is even more difficult with AI, because so many promises have not been fulfilled, that we have to be careful not to exaggerate again with enthusiasm. Sometimes I am afraid that a kind of bubble of artificial intelligence has been created, in which we expect a lot from this technology. But there are already a few phenomena on the horizon and some things we can learn from the past. Maybe we should start with those.

We need to look at where technology, whatever it may be, can help to achieve something really important. It may sound foolish, but we sometimes speculate about whether technology can achieve this or that when, in fact, we do not need it or it matters to no one. At home I have an old, broken popcorn maker and I have not bought a new one because I do not really need it. But I also have a toaster that I use regularly, and if it breaks down I will buy a new one immediately. So the first criterion for assessing the success of artificial intelligence should not be where it can be used, because the possibilities are practically unlimited; you need to consider instead whether such an AI system is worth buying again. If they broke down, I would immediately replace my fridge or my dishwasher. But the popcorn maker? In the case of AI, there are essential functions for which we will certainly want to "buy" something again. For example, it makes it easier to drive a car. It makes shopping easier thanks to recommendations. It improves credit card security with pattern recognition. It enhances the interfaces for photos, videos and social media – all thanks to Artificial Intelligence. The areas where we can expect AI to have a significant impact, and where development is already underway, are exactly the ones you would expect, as they are always in the vanguard. Defence and security have always been driving forces behind technological advances. Similarly, health care attracts interest and funding. There are also two huge and quite obvious areas: business and, where this can happen even more quickly and on a greater scale, entertainment. Computer games, video streaming, the whole social media industry: this is where artificial intelligence will be more and more influential. But in all these areas we should stop thinking in the manner suggested by science fiction films, in which robots are just waiting to greet us when we return home.
We already use a number of AI-based solutions at various moments in our lives. If you ask the man in the street whether he has ever used artificial intelligence, he will probably say that he has not. But if you ask him whether he has taken pictures with his smartphone, the answer will be: "Yes, of course!" And there is an AI element there too. If you ask him whether he has bought anything online and received recommendations, the answer will again be yes. This is also artificial intelligence. Who doesn’t have Netflix? How else would it guess our taste other than with AI? It is a very simple form of artificial intelligence, in which we use digital technology to learn from data and make decisions, or to autonomously process larger amounts of data to facilitate other processes. That is why I think that more and more AI technology will invisibly permeate our world.

I think that, with the broad application of the AI that you mentioned, there is now more than just a bubble: national governments and international organisations are drawing up AI development strategies at a very wide range of different levels. I know that you have cooperated with the European Commission in making recommendations for AI. Could you tell us about this process?

First of all, I would like to make it clear that I do not think we are dealing with a bubble at all. Nor do I think that we will be completely disappointed with the solutions that AI will be able to provide. I am just saying that we should pay attention to how it is advertised – the advertising gives the impression that artificial intelligence is a miraculous panacea for all ailments. There is a gap between what is achievable, what we will actually do, and what we expect and hope for. However, I believe, as does everyone around me, that it will happen. Some of these solutions we will no longer walk away from.

And precisely because AI has such a powerful influence, has spread so widely and will change so many areas, it would be good to have a normative framework and an ethics of Artificial Intelligence at our disposal. It is not just a question of listing what is allowed and what is not, because it is not a question of restrictions. It is also about opening up new opportunities. Imagine a company that does not use artificial intelligence to improve the financial health of its customers because it is afraid of making a bad move, of a negative response, or of unflattering headlines in the press. Yet what is at stake is the possibility of improving financial services for a multitude of people, so such hesitation should not happen. Our work will mainly be carried out at the European level, which is important for all Member States, but its effects can be transferred to other contexts – to the USA, Japan, Brazil, China and also to the UK (depending on Brexit). And I hope that we will succeed in two things. Firstly, we will set clear limits on what should be banned – as in agriculture, the automotive industry or the mining industry. Real red lines that must never be crossed. Secondly, we will try to make sure that everyone understands when it is necessary to commit to certain activities in order to develop the field of AI, and that opportunities should be seized because, as I well know as a former Catholic, a sin of omission is as grave as a sin of commission. And when it comes to artificial intelligence, there are many opportunities that we should not overlook.

The framework for the development of artificial intelligence is therefore in place. But perhaps we can move on to the personal and social agency in cooperation with AI, which is already happening. In your research, the notion of an envelope appears: what is it about?

It is important to understand why artificial intelligence works so well today – unlike in the past. There are many factors, including the fact that we are no longer talking about the symbolic dimension of AI, but working with neural networks and deep learning. But one of the factors that determines whether AI, or digital technology in general, works well at all is the existence of a friendly environment. Autonomous, driverless cars work well because there are sensors, maps, satellites – the whole world is IT friendly. When we introduce a robot into a given industry, we call the physical three-dimensional space in which it operates successfully the envelope of that robot. For example, a robot must paint a car – the space in which it works, while painting a specific model, is its envelope. Recently, I have been popularising the view that, as the world becomes increasingly digital – billions of images on the web, maps, sensors, people constantly connected to the network, the Internet of Things, and so on – the world is becoming an ever larger envelope within which artificial intelligence operates. One could say that it is not that AI copes well with the world, but that the world has become easily accessible and friendly to AI applications. For example, the recognition of faces and images is made possible by the tens of thousands of cat pictures online on which the software can be trained. Without this wealth of images, not much could be done with such software.

Is that why this envelope is not made up exclusively of equipment, principles and policies, but also of human activities in the digital world, of our recommendations and opinions?

That is exactly how it is. In the past, one of the aims was to build an artificial intelligence capable of translating between many languages. As long as we tried to do this the way a bilingual person would, we failed. And now, using Google Translate, which copes quite well, especially with texts in formal language without slang and regionalisms, you can see that it works. And it had to work, because of the huge quantity of texts available on the web in digital form. We must realise that we are also digitising ourselves and our behaviour more and more. That is why I sometimes talk about "onlife", because our existence is not completely offline, not completely online, but somewhere in between. We are both analogue and digital, and digital technology works very well in this infosphere.

This also changes our approach to the goods available on the market. We are not only consumers of technologies or applications based on AI, but we are also a source of data for them, giving them power through information about where we like to be, what we like to spend money on. Does this not change the concept of the free market consumer?

That is very much the case. It used to look like this: when my grandparents went shopping, the local shopkeeper knew them and their habits – he knew what they used to buy on Mondays, what they didn’t buy on Tuesdays. It was human knowledge. Then we tried to automate the process. There came the era of leaflets in mailboxes that did not work at all, because the process depended on guessing what millions of people would like in the catalogue. Today we have a much more individual approach, because we are the first generation to share their data voluntarily on such a scale. So everyone knows what car you like, what sports you like, what pair of shoes you buy, what news on social networking sites you like, which football team you support. Such a profile becomes extremely valuable for any technology that can use it to develop recommendations, suggestions, or advertisements. So we were the first to be placed in this infosphere as digital entities and are "read" by technology like digital books.

It is said that AI is really changing the way we live. And how do artificial intelligence and information and communication technologies affect your work? You are both a philosopher and an educator. What is AI for you?

Because I’m quite old, I grew up with a philosophy based on the old concept of AI, the symbolic and logical one, built on lines of code. Neural networks have been around for a long time and have been discussed in the academic world, but they were not a popular tool used by companies and governments. This was a source of intellectual challenges for me, because I have always seen AI as a form of agency different from the human or animal kind. It is not a biological form, as in the case of my dog, and it is not a human form, as in the case of my friends, me or my wife. It is not even the collective agency of an institution where people work together as part of a team, such as a sports team. This broadens our understanding of what it means to be an agent (able to do things). But I emphasise: there is no science fiction in it; I am not talking about RoboCop or the Terminator. I leave that to idle speculation. I am talking about a real ability to interact with the world, to change that interaction by transforming data into new processes and strategies, and thus to learn, in the machine sense, from the world. If we take these three elements – autonomy, interaction and learning – and remember that we have technology that incorporates them in an everyday sense, without science fiction, like a robot lawn mower, for example, then it becomes possible to philosophically challenge many ideas that have existed for a long time. Responsibility: what do we want our interactions with this new form of agent to look like? What responsibilities and spheres of life do we want to hand over to it, and what do we want to keep for ourselves? Control: how can such agents be enlisted for the common good? These are all fundamental questions that we have been asking ourselves since the time of the Greek philosophers, but now they are gaining a new face – because a new figure has appeared on our chessboard.
Even if that figure is a small one, it makes for a completely different game. Therefore, philosophy now has to face the digital revolution in general and artificial intelligence in particular.

You have actually described the place and role hitherto reserved for the human being.

You mean agency? Yes and no. Yes, because when we didn’t have robots, AI and technologies, we had to do everything ourselves. When we look at Aristotle’s description of slaves, it is enough to replace the word ‘slave’ with ‘robot’ and everything fits. When we didn’t have the robots, we used each other. Then we used animals, wind and water energy, and finally engines and electricity. Today, we have something better, capable of performing the same tasks, and even many others. I hope that we will prove to be suitable agents for controlling this type of agency, because I believe that, in the end, it is a matter of human choices and decisions. It is not about who does what, but about who decides what to do and whether to do it. Let the lawn mower robot be an example again. This is a task that needs to be carried out. But when and how? Is this the right time? Maybe not today, because the robot makes a noise and I don’t want to disturb my neighbours? That is my job, my decision, my choices.

I see here a lot more power, decisions and choices in human hands, and that is followed by ethics and politics. How many nails do I need to drive into this board? I don’t mind the robot doing it for me, but it’s my role to decide whether or not to drive these nails in. In my opinion, AI increases the burden of control that we have and the responsibility to control this new technology. Human intelligence is even more necessary, not less. It is even more important in terms of decision-making, planning and strategy. One more analogy: there is a dishwasher in the kitchen. But what will it take? A cup from my grandmother? No, because it is too valuable and I do not want something to happen to it. How much detergent to add and when, whether the washing is finished, whether the machine works as intended, whether it is broken or not, when it is replaced, these are all my decisions. Do you say that I don’t have to clean the dishes anymore? That is true and I am very pleased about it! But who manages the dishwasher?

Can AI give us more empathy in relation to our environment and social relations?

This is a very good question. I think things can go very differently. I hope empathy will increase, but who knows? During the recent elections in Italy, I made an open suggestion to the new government that green technologies and environmental values should start to go hand in hand with blue, that is, digital technologies and innovative values. These values can interact with each other. I hope that when this happens, we will maintain a more intelligent, human and empathetic relationship with each other, with nature, with the world created by man. But we should start working hard to achieve this.

In conclusion, I would now like to ask a question: if we combine all the elements we are discussing: digital technology, artificial intelligence, the development of new forms of agents, what will we get? People think the challenge is in digital innovation, but the problem is in managing the digital world. And what we do about it. Every company can buy a start-up, every company can employ three fantastic people and set up a small company, perhaps it can also create some great gadgets, etc. But the core of the problems ahead of us is what kind of project we want at the moment. This can be summarised in two points. First: our current challenge is not to innovate digitally as such, but to manage the digital. The second is that in order to do this, we need a few projects that we do not yet have. For example, we are operating in Europe on the basis of an old programme that has been inherited and continued since the Second World War. But what future do we want to build? If we do not have a human project, it becomes more difficult to manage the digital world, and the innovations we are aiming for are a little more complicated because we are basically moving in the dark. The direction taken means that policy has a necessary and fundamental role to play. With a big "P", real planning and reflection on the future we want to live in. I think that Europe, if it wants to, can play a big role and, despite all its limitations, I have a very pro-European attitude. And I hope that this will happen.

I think this is a perfect conclusion and a good moment to end our conversation. Thank you very much.

Prof. Luciano Floridi lectures on the philosophy of information at the University of Oxford. He is the author of the books: Augmented Intelligence – A Guide to IT for Philosophers, Philosophy and Computing: An Introduction and Internet – An Epistemological Essay.

Real and imagined technologies

ALEKSANDRA PRZEGALIŃSKA: What stage is the work on artificial intelligence at? Is it still the beginning or are we approaching a breakthrough?

JEAN-GABRIEL GANASCIA: There is no easy answer to this question. At every step we see an extraordinary development of artificial intelligence, and its increasing computational power enables successive practical applications in our lives. That is what progress looks like. The world is changing, and so is artificial intelligence. Further social development largely depends on it, which people usually do not realise.

The Internet is a good example, but AI is also useful in face or fingerprint recognition on our phones.

All this is AI. Programming using intelligent techniques is also on the increase. Although this is not always visible at first glance, artificial intelligence is changing our world.

Our society is digital and dependent on the flow of information, and the data generated by non-human factors can only be analysed by non-human techniques. We live in an information society where it is important to be able to pick up on rumours in order to anticipate the future. Software developers understand the huge role of feedback in product development. And yet, as late as the mid-1990s, nobody knew what the economic model of the internet would look like. Especially in the USA, where people were very excited about the idea of the web, but nobody knew how to make money from it.

The whole idea was initially created in the Hippie spirit. Technologies and the atmosphere in California were very open.

Until it was realised that it is possible to profit from the data itself. For example, earning money from advertising is difficult because the audience is broad and the return on investment is low. But if we direct our advertisements to specific audiences, it becomes interesting. And that’s just one example of the use of intelligent agents in modern economics. Perhaps in the future artificial intelligence can be used in conjunction with autonomous devices. But many people are afraid of such machines, believing they will become like golems – independent and with free will. However, these concerns are not supported by the facts. Rather, they refer to a very primordial fear, which appears already in ancient stories. Did you know that the word ‘robot’ was invented in the Czech Republic?

Yes, it was invented by the writer Karel Čapek.

It is interesting that Čapek came from Prague, the birthplace of the Golem. This legend tells the story of the power of man who equals God, even though his creation is contrary to nature. However, it is important to distinguish between imagination and reality. The world is changing because of artificial intelligence, but not as various traditions have planned it.

What do you think about the discourse on the singularity that Raymond Kurzweil talks about – the predictions that perhaps we can expect a breakthrough in 20 years’ time?

I have been interested in the subject of the singularity for years now. I am still amazed that so many people, especially in large companies, support this hypothesis. To me, it is madness. I asked philosophers why they were not trying to refute it. They laughed that it could not be taken seriously. I tried to discuss it; I was sceptical and had concrete, technological arguments, but nothing came of it. Have you read Raymond Kurzweil’s books?

"The Singularity Is Near"?

For example. All his books are similar. He writes that in nature evolution proceeds exponentially – and similarly in technology, because this is how the whole world works. Moore’s Law is the crowning argument of his theory. That is something that can be discussed, and I am trying to do so. So far, everything indicates that Moore’s law has reached its limit and will stay there. But even if it turned out otherwise, it would not change anything. The enormous speed of a computer is not associated with the creation of consciousness, or with the ability to upload your own consciousness into it. From a philosophical point of view, it is strange to talk about transferring consciousness to a machine, because it means that you are a materialist and, at the same time, that your mind can be completely separated from the body.

As if we didn’t have phenomenology and had never discovered the intelligence of the body. As if these two things could be completely separate.

Exactly. There is also the argument that machines learn by themselves and that the knowledge acquired in this way is much more efficient than ours. Yes, machines will be smarter than us. Even without consciousness they will make better decisions – and that is why we pass this power and responsibility on to them.

Thomas Kuhn created the concept of the "paradigm shift". In the history of discoveries, this meant becoming aware of a revolution taking place, one that overturns existing ideas and concepts. I am aware of the power of machine techniques, but I also know their limitations. The machines themselves will certainly not change the paradigm. But cooperation is possible in many areas – in biology or physics, for example, artificial agents can play an extremely important role. In the humanities and cultural studies too. I myself work with literature experts to develop a new interpretative tool for understanding traditional literature.

What do you think about deep learning? Before implementing these technologies, should we consider how to make them more transparent in order to avoid risks?

Deep learning is a very powerful technique and we lack the semantics to describe it. We simply do not yet know exactly how it works. There is no theory of deep learning, which is problematic from a scientific point of view. It’s funny, because the history of neural networks is very long. The first models were built as early as 1943. It was an exceptionally difficult undertaking. It turned out that with a three-layer network you can obtain any configuration you want, fully functional – the model was universal. But configuring such a network is very difficult.

Deep learning has, after all, proved to be very practical, so it is all the more fascinating that we have no theory of how it works. The statistical results show that deep learning techniques cope well with large numbers of examples, which is particularly important when learning to recognize images or voices. It is an extremely efficient technology, but it does not solve all problems and does not meet every need.

That is, it is not a magic wand.

No, sometimes we need unsupervised learning to generate new ideas.

In addition, you need to know exactly how a specific example relates to the program’s output. Sometimes this is not a problem, but in the case of face recognition it is. The machine will recognize either John or Aleksandra, and we cannot tell it that it is wrong, because we do not know how it works. The program can help us, and if the statistics look good – great; if not, there is a problem.

In other applications, on the other hand, a broader justification of the outcome is necessary. Take the example of cooperation with a bank that wants to use artificial-agent technology to simplify the process of granting credit. The machine is only used to calculate the score on the basis of which creditworthiness is established. "You got 0.6, and you need 0.7" – that will not work, because the customer will want to know the specific reasons why the bank does not want to give him a loan.
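A bare score with a threshold can be contrasted with a score that carries its own justification. The sketch below is a toy illustration – the features, weights and values are invented, not any bank’s actual model – of a linear score whose per-feature contributions can be reported back to the customer:

```python
# Illustrative sketch (invented features and weights, not a real
# credit model): a linear score whose per-feature contributions
# can be reported back to the customer, unlike a bare "0.6 vs 0.7".

WEIGHTS = {"income": 0.5, "debt": -0.3, "years_employed": 0.2}

def score_with_explanation(applicant):
    """Return the total score and each feature's signed contribution."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature]
        for feature in WEIGHTS
    }
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"income": 1.0, "debt": 0.8, "years_employed": 0.5}
)
# total = 0.5 - 0.24 + 0.1 = 0.36; each signed term tells the
# customer which factor helped or hurt the decision.
```

Each signed term is exactly the kind of specific reason the customer can be given – and can object to.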

This resembles the Chinese citizen-scoring project. It is terrifying, because it is not clear exactly what points are awarded for, and there can be many criteria. I suspect that the system is based on deep learning.

Exactly! And this will be a problematic issue for the future of our civil society based on the idea of democracy. Without clear criteria it is a real nightmare. Imagine a future in which Facebook starts closing accounts with no reason given!

This is a terrible vision!

Terrible! It is therefore important to ensure, in future, that justifications are given and that people are able to object. For many years to come, we will have to use learning techniques that can justify their results. This is what we do in the field of intelligent agents; this is the stage we are at.

Do you mean the concept of the so-called Explainable Artificial Intelligence (XAI)?

Yes. In my opinion, this will be an extremely important aspect in the future. It is not necessary for all applications of artificial intelligence, but for some it is crucial.

We have already mentioned many risks and negative scenarios. The Chinese citizen-scoring project is not the best example of the social usefulness of AI, but I would like to conclude by asking about the greatest threats and hopes associated with artificial intelligence.

The greatest threat results from the fact that society changes with AI. The concept of politics is also evolving. And this is not only linked to artificial agents. Democracy is an extension of a country and its territory. Today, territories are digitally penetrated, which is a major challenge. As a result, actors independent of the state may emerge. This can have positive effects, but it can also be dangerous, because it weakens the state, and actors who have not been elected by society are completely independent and can do whatever they want.

Take Amazon – it is something terrible! They have a lot of power, and if they want to buy a company, for example, they can exert the appropriate pressure because they have the resources. This is probably a good illustration of the real threat in the future. Artificial agents can be used for a variety of purposes, sometimes nightmarish. Let us see where China goes with its image-recognition technology. But these negative scenarios are not yet set in stone.

Everything is up to us. My optimism depends on whether we are able to understand both the positive and negative aspects of new technologies. One and the other result from the very nature of information. If people do not agree to something, everything changes. And such a change can happen very quickly. I think that is what awaits us. If people are aware of bad practices, companies will be forced to abandon them.

It can also happen, as in the case of Facebook, that the power of money and data is so enormous that it makes them resistant to our dissatisfaction. Take the development of their newer brand, Instagram, a medium seemingly free from the sins of its giant parent.

I agree. That is why for years I have been looking for tools from political philosophy that can help us understand the new reality. A good example is the concept of supervision. Maybe you have come across it?

I think it is useful.

Especially in science. I am convinced that new discoveries will be possible with the help of artificial agents. But that does not mean it is going to happen automatically. We need a partnership between man and machine. A long time ago I talked with an elderly and esteemed doctor in France. He told me that, yes, it would be very interesting, because all scientists share the same dogma, the same kind of dominant concept – and machines would be free from this and could learn without top-down assumptions or prejudices. I was very young at the time and tried to explain to him that this is a more complex issue. Machines also have dogmas, because when you give a machine an example, it receives it in a specific representation.

So man transfers his own dogmas to the machine? After all, all data are human data.

Even in the presence of dogma, the machine can help to generate new theories. In the relationship between man and machine, the role of the former is not so much to invent hypotheses as to create new fields for their creation, which the latter can then explore. This is a huge breakthrough, which we owe to artificial agents. A revolution is taking place in medicine, physics and other sciences, but many people, especially in the field of big data, behave naively. They think that data – without a model, without a theory – is enough to generate results. Theory remains indispensable, however striking what can already be achieved without it.


is a professor from the Sorbonne and leads one of the teams at LIP6 (a research centre dedicated to data science and artificial intelligence).

Fabien Gandon

Aleksandra Przegalińska: "The semantic network" is a difficult and foreign concept for many people. What is it? What is it for?

Fabien Gandon: When we look at the history of the web, it turns out that the very first idea, and the first document created by Tim Berners-Lee, already described the semantic network. So it is not something new, as many people think. Tim’s idea was to create a very rich system of various elements, linked by various types of relations. And we find all of this when we browse the web, visit pages and click links. A semantic network is a kind of Internet created for machines – access to the network is supposed to be as easy for them as it is for people. The rules are the same: we put data on the web, and when a machine comes across specific information, it can use links to discover new data. Here is a concrete example: I put information about Warsaw on the web. I say that Warsaw is a capital – the capital of a country – and I can create links between these terms. There is, for example, another capital, Paris. I can also create a link between the information about Warsaw and about Paris, so when the machine finds the data about Warsaw, it can follow the link, discover the information about Paris and add new data to its database. Just like a human browsing a web page.
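The Warsaw–Paris example can be sketched as a toy triple store. The node names and predicates below are invented placeholders, not real web identifiers such as RDF IRIs:

```python
# A toy triple store sketch of the Warsaw/Paris example.
# Names and predicates are invented, not real web identifiers.

triples = {
    ("Warsaw", "is_capital_of", "Poland"),
    ("Paris", "is_capital_of", "France"),
    ("Warsaw", "same_role_as", "Paris"),
}

def discover(start, graph):
    """Follow links from a starting node, like a machine browsing the web."""
    found, frontier = set(), {start}
    while frontier:
        node = frontier.pop()
        for s, p, o in graph:
            if s == node and (s, p, o) not in found:
                found.add((s, p, o))
                frontier.add(o)   # follow the link to discover new data
    return found

facts = discover("Warsaw", triples)
# Starting from Warsaw, the machine follows the link to Paris and
# also discovers that Paris is the capital of France.
```

In the real semantic network the triples live on different servers, but the principle is the same: following a link yields new data for the machine’s database.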

Why do we enable machines to build knowledge?

There are many advantages to this. When you want to create an AI, or think about artificial intelligence at all, the concept of knowledge always appears. It is essential for the development of intelligence, and vice versa: intelligent data processing produces new knowledge. Therefore, the question immediately arises: where to take this knowledge from, and where to put the new knowledge once it has been created? The semantic network is the answer to these questions. It is a place where machines can share knowledge, find new knowledge, and contribute their own. So the network becomes for machines what it is for us: a place for exchanging and linking information. Both we and artificial intelligences function in the same network. We are building a hybrid community. It is funny, but in 2012, among the 50 most active editors of the English-language Wikipedia, almost 40 were bots. Wikipedia has become a hybrid space in which people and software agents work together to create an encyclopaedia. Bots, for example, search for spam and missing links to sources, and people supervise the quality of this work. Artificial intelligence is also established on social networking sites such as Facebook, Twitter, Weibo in China, and many others. Enriching the network with data and semantics affects not only the network itself, but also all related applications. The information becomes available to artificial agents, which can process it. Machines, for example, can detect communities focused on a specific area of interest, such as agriculture. They analyse the structure of such a community and, on the basis of their conclusions, find its most recognized expert. In this way, they help us find the best specialist. Such a semantic social network is created when we combine the attributes of a semantic network with the capabilities of social media, based on interaction between users and the exchange of content on a global scale.

In our laboratory we use Condor. It is a tool that enables you to view your own social network from a wider perspective: central and peripheral nodes, our communication with others, its frequency, and the subject matter. But the solutions you are talking about are more advanced. What other applications can they have?

One of the scenarios is for the community to manage itself. In each community there is a diversity of users: ordinary members, experts, group managers. Each requires different patterns of analysis and feedback. A semantic network can help to aggregate them and find connections. If, for example, you are interested in football and I am interested in tennis, then without semantics we will not discover any common interests. But if we use the semantics of a thesaurus, which recognises that tennis and football are both sports, then both of our profiles are enriched with "sport". This is an example of how social analysis can be combined with semantic analysis.
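The football/tennis example can be sketched in a few lines. The tiny "broader concept" hierarchy below is an invented stand-in for a real thesaurus:

```python
# Sketch of the thesaurus idea: broadening two interest profiles
# until they meet. The tiny hierarchy below is invented for
# illustration, not taken from a real thesaurus.

BROADER = {"tennis": "sport", "football": "sport", "sport": "activity"}

def enrich(term):
    """Return the term plus all of its broader concepts."""
    terms = {term}
    while term in BROADER:
        term = BROADER[term]
        terms.add(term)
    return terms

common = enrich("tennis") & enrich("football")
# Without semantics the two profiles share nothing; with the
# thesaurus both are enriched with "sport" (and "activity"),
# so they overlap.
```

The intersection of the enriched profiles is exactly the shared interest that plain keyword matching would miss.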

A "distributed artificial intelligence" is one that consists of many artificial agents. What does the current work on this look like?

When we are talking to each other here, you are an intelligence and I am an intelligence, and we are part of a society together. Artificial intelligences do not have to be separate entities either – monolithic, sitting alone in their corner. We have reason to believe that many different types of AI will have to interact and exchange information. Artificial intelligence communities – systems made up of numerous intelligent agents – will also be set up. After all, the network itself is decentralised. We have servers, sources, programs all over the world, and we can connect to everyone. And because we want AIs to connect to a distributed network architecture, they need to have a distributed architecture – made up of many different agents around the world – just like our servers and websites.

I have the impression that most people have a vision of AI as a robot which will come to take control of people and teach humanity a lesson. It resembles a human being, it is cold and calculating, but it understands very well. It is a kind of opposite to the human being. The concept you have put forward is very different – it is a whole population of artificial intelligences with different abilities. Which of these visions is more likely to emerge in the future? Which would benefit us more?

These concepts need not be in conflict. A multi-agent system contains monolithic artificial intelligences: each agent is autonomous and constitutes an artificial intelligence in the most widespread sense. Most people, however, concentrate on just one of them, ignoring the fact that all over the world many actors are building many artificial intelligences. If they are connected to the network, they will communicate through the network. This is already happening. The bots interacting with Wikipedia are just a few of the autonomous agents that make up an intelligent system. What would be the most desirable scenario? I would say that decentralised approaches are best. For networks, as for other IT issues, centralisation is a major threat. It gives one party the opportunity to take full control, and that is never good. There is also another risk: if a single specialised artificial intelligence fails, everything collapses. Decentralisation serves the democratisation and resilience of networks, because if one agent in a multi-agent system is imperfect, another can take over its tasks and continue its work.

I have recently dealt with the issue of context awareness, which seems to be extremely important for the future of the Internet of Things. What do you think is its main purpose?

In my understanding, context awareness is about creating an artificial agent we interact with that can adapt to our current situation. For example, if I use a mail program and it knows I am very tired, it can change its behaviour and decide not to download e-mails from the server every minute, but only every ten minutes, so as not to stress me. A very important aspect is the search for philanthropic applications of artificial intelligence, such as equipping devices with the ability to detect whether a person is walking or driving, so that the system does not disturb us during these activities. There are many interaction projects trying to make systems smarter and less distracting. There is, for example, the Waze application, which makes it impossible to use your phone while driving. It is very limited, because it does not do anything "smart" – it does not connect to a smartwatch, it does not distinguish between the driver and a passenger. But it is an example of how we are moving in the right direction.
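The adaptive mail program described above comes down to a single decision. The sketch below is a toy illustration – the user-state detection and the interval values are invented assumptions:

```python
# Sketch of the context-aware mail idea: the polling interval adapts
# to the user's state. How "tired" or "driving" is detected is
# assumed to come from elsewhere; the values are invented.

def poll_interval_minutes(user_state):
    """Back off when the user is tired or driving; poll normally otherwise."""
    if user_state in ("tired", "driving"):
        return 10    # check mail only every ten minutes, to avoid stress
    return 1         # default: every minute
```

The hard part of context awareness is, of course, the sensing and inference behind `user_state`, not the rule itself.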

I recently came back from Rio, where we were told that if we wanted to go anywhere, Waze was the best application, because it would show us the shortest way. But Google Maps has a feature that allows it to show you a route that is perhaps not the shortest, but the safest.

This is also an excellent example of decentralisation and multi-agent networking. Google Maps is just one option. Together with Waze and OpenStreetMap, we get different ways of looking at the problem, different suggestions and different channels through which users can contribute to creating and driving change. The ecosystem must not be trapped in a single centralised system and a single way of looking at the world.

For many people, all of the ideas we are discussing may seem quite fantastic. However, the issue of privacy remains very problematic. Are we losing it by enriching our systems? Should we be afraid of this?

We have to be careful. This applies not only to the semantic network or artificial intelligence. For me, this is the general philosophical discourse around every tool. A hammer can be used to build a house, but it can also be used to kill. Semantics can be used to improve security and privacy, or to attack them. It is not good or bad in itself. A long time ago, when I worked at Carnegie Mellon University in the USA, we worked on the e-Wallet project. We used the semantic network to determine whether anyone should have access to our data, and also to control the level of access. Thanks to semantics, you could do much more than simply grant access or not. You could set precise conditions: "my wife always has access to my location", "my boss has access to my location only at the workplace", "firefighters can always locate me", "my location is public, but only at the level of the city where I am". All of this can help protect our privacy. You should not be afraid of semantics, because it is not bad in itself. However, it must be subject to control, like everything else.

Wiesław Bartkowski
The computer is the "amplifier" of my capabilities

Adam Mandziejewski: Artistic creation and creativity have always been considered one of the distinguishing areas of human activity. Meanwhile, AI is increasingly entering this field as well, and coping quite decently. What is the place of AI in your work?

Wiesław Bartkowski: First, a small correction. AI (Artificial Intelligence) is not doing well yet; it is only learning to cope in this field, although it has spectacular successes in very narrow areas. There is no general artificial intelligence of the kind one commonly imagines. This is a big misunderstanding, due to the fact that at the beginning of AI research people searched for just such a general intelligence, matching the human one. Since then a lot has changed; the focus has shifted to solving very narrowly defined tasks, such as recognizing what is in a picture or what a camera sees. The narrow area called computer vision was supposed to give computers eyes, because they were completely blind despite the stream of data from their cameras. In this field we have had spectacular successes in the last few years thanks to machine learning (one of the methods of artificial intelligence). I do not mean finding images of cats (laughter), but face recognition – even recognizing the expression of emotions on a face, and tracking it in live video rather than in a static photo. Another example is the recognition of cancer in X-ray images, though people still do this better than machines. The list is long. The development of machine vision allowed computers to do new things, such as driving cars, but also to cleverly process images, change their features, and even turn photographs into pictures imitating the style of a well-known painter.

In my opinion, however, the most important thing that has happened is the change in how we program computers. This gives great scope for artistic exploration. Until now, computers were programmed by defining exactly what they should do in every possible situation. Every reaction had to be programmed by the coder, even a random one. This approach obviously offers great capabilities, and I personally like it very much – imperative programming in particular. It is very pleasant: I create sequences of commands and the computer executes them. This does not mean, however, that everything can be predicted. Even very short programs – a few lines, one instruction per line – can surprise us with the complexity of the effect they produce. I love cases where the computer surprises me, because I expected a different effect, and the unintended one is even better! Nevertheless, the entire mechanism producing this effect is constructed "by hand", as if we were building a huge machine out of cogs. I joke that at Creative Coding I am teaching the (probably soon) vanishing craft of "manual" computer programming.

Machine learning has changed this. The programmer does not determine every reaction; instead, it is possible to create a general architecture of a learning system, which is not programmed but learns from examples. What is more, methods of unsupervised learning are already available, so the computer can learn from its mistakes on its own – just as we do. But for now, this is still only optimisation. Even if a computer wins at GO, it does not mean that it thinks; it optimizes, based either on the huge amount of data it has received from us, or – in a game such as chess – on data it has generated itself by wandering through the space of possible moves. It reduces huge datasets to reach the optimal decision in the current situation. In other words, it is great at solving puzzles, but "in life" it is still completely clumsy. Life requires emotions, and no one knows how to implement those in artificial intelligence. Yes, emotions! The machine has no emotions, and they are essential for making rational decisions – not for solving logical puzzles, but for making "living" decisions. But that is material for a separate interview. To those who are interested, I recommend the book Descartes’ Error by Antonio Damasio.

Coming back to AI and my work. AI is based on artificial neural networks – mathematical models of the neuron that capture the behaviour of a biological neuron in a very simplified, sometimes even caricatured, way. It is remarkable that a network of artificial neurons, simplified to such an extent and interconnected, exhibits features as complex as beating a human being at GO.

Sixteen years ago, when I started working on my doctoral thesis in psychology under the direction of Professor Andrzej Nowak – who introduced a pioneering approach to psychology, creating models of psychological and social processes and then simulating them on a computer – we were simulating a stream of consciousness, or social impact mediated by technology. The IT skills I gained during my computer-science studies at MIM UW were useful. In the first approach to my doctorate, we tried to introduce emotions into artificial neural networks, and, using such "emotional" networks, I wanted to search for optimal solutions to the most difficult problems in computer science, the so-called NP-hard problems. Ha ha, old times. Unfortunately, not much came of it, but we also taught students at Connecticut College how artificial networks work, showing them, among other things, the pandemonium model. I remember that we made a small educational application explaining the operation of such a model visually. Now it looks a bit retro.

Pandemonium is a theoretical model describing the recognition of objects in the process of visual perception. It uses the metaphor of the mind as a collection of demons. The demons "call out" to each other according to an internal hierarchy of layers: successive layers are the data demons, the feature demons, the cognitive demons and finally a decision demon. The model can be considered a forerunner of today’s deep learning networks. It dates back to the pioneering times of artificial intelligence research in the 1950s and is now recognised as a classic work of AI research. It was published in 1959 by Oliver Gordon Selfridge, called the "father of machine vision", in the article Pandemonium: A paradigm for learning.
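The layered demons can be sketched in a few lines of code. The letter features and weights below are invented purely for illustration:

```python
# A minimal sketch of Selfridge's pandemonium: each layer of "demons"
# shouts a value to the layer above, and the decision demon picks the
# loudest. The letter features and weights are invented.

# Feature demons: each shouts how strongly its feature is present.
FEATURE_DEMONS = {
    "vertical_bar": lambda img: img.count("|"),
    "horizontal_bar": lambda img: img.count("-"),
}

# Cognitive demons: each letter weighs the feature demons' shouts.
COGNITIVE_DEMONS = {
    "L": {"vertical_bar": 1, "horizontal_bar": 1},
    "T": {"vertical_bar": 1, "horizontal_bar": 2},
}

def decision_demon(image):
    """Pick the letter whose cognitive demon shouts the loudest."""
    shouts = {name: f(image) for name, f in FEATURE_DEMONS.items()}
    loudness = {
        letter: sum(w * shouts[feat] for feat, w in weights.items())
        for letter, weights in COGNITIVE_DEMONS.items()
    }
    return max(loudness, key=loudness.get)

# An "image" with one vertical and two horizontal strokes: "|--"
# shouts vertical_bar=1, horizontal_bar=2, so L scores 3, T scores 5.
```

The resemblance to a deep network is direct: feature demons play the role of a hidden layer, and the decision demon is the output layer taking an argmax.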

At the time, the pandemonium model was completely ignored by the AI research community, as it differed radically from the logic- and symbol-based approach promoted then. Today, observing the enormous success of deep learning, we can say that Selfridge was ahead of his time.

That is why I want to show this model at "Przemiany", but in a new version, using a medium that combines code, electronics and digital fabrication. It allows you to create tangible, embodied experiences that engage the viewer much more than the ubiquitous screens. I repeat after my guru Hiroshi Ishii (Tangible Media Group – MIT) that I mix bits and atoms. That is what I teach my students.

Returning to AI and the computer. For me, a computer has long been an "amplifier" of my capabilities: I can do more with it than without it, for example when writing a computer simulation of a phenomenon that is difficult for me to understand. If, in the future, this computer is an artificial intelligence, and we approach it not full of fears but open to cooperation, then we will be able to achieve much more.

Here, I’m reminded of a story that’s important to me. I love how the result of my program surprises me. I am often even more surprised by what happens to these effects. I was amazed when, thanks to Łukasz Ronduda, my simulations found their way to an exhibition of contemporary art. I’m delighted that now my students are experiencing it, completely surprised by the fact that their graduation works will become a set for an artificial intelligence performance.

Returning to AI. I have to admit that after my failure with NP problems I lost interest in artificial neural networks, but now this area is becoming exciting again due to, among other things, the successes of deep learning. This topic has become important for artists as well. This was clearly visible during Ars Electronica 2017, the leitmotif of which was "AI – THE OTHER I". It was interesting to look at technology as a space for projecting our desires and fears. And asking the question, what will make us different from thinking machines? I personally like to think about the development of AI as a journey to better understanding ourselves and at the same time to becoming humble, because maybe we are not as special as we think.

I am glad that the Przemiany Festival is returning to the subject from another interesting perspective, speaking of a civilization of algorithms that shapes our world, not excluding culture, and to which we are giving away more and more power. I personally wonder about what Robert Epstein said: "If people trust machines, it will be the end of democracy".

But leaving fear aside and following fascination: the festival will feature a media lab where we will experiment with soft robots – robots that have no motors, but artificial "muscles". Some scientists believe that the combination of soft, flexible robots and artificial intelligence will be a huge breakthrough in robotics and will, for example, allow robots to enter public spaces safely.

We will look at it from a different perspective than usual at Creative Coding. We will explore the creative potential of this combination. And note – this is not a workshop for engineers, although we invite them as well. You do not have to know how to code, or know electronics or digital fabrication. In addition to the robots, participants will experience the approach we take in our studios, which is to teach hard skills in a soft way. This time it will be soft robots. We particularly encourage the participation of artists, designers, architects and everyone who thinks they do not have a technical mind and that this is probably not for them.

And as regards the part of the question about artistic creation and creativity: looking at the oil paintings recently made by robots created by Hod Lipson, I came to the conclusion that the important question is not creativity, but: who is the artist?

Is Artificial Intelligence a partner or a rival in the field of creation?

The perception of artificial intelligence as standing in opposition to us has not yet disappeared, so I will repeat once more: perhaps it is worthwhile to look at it in the context of cooperation. Garry Kasparov said that a good human with a machine is the best combination.

During my IT studies, my masters taught me that a computer would not soon beat a human at GO. GO is unique because it is a specific combination of science, art and sport. It was said that the game requires human intuition. And today everyone is in great shock at how quickly AI methods have led to victory over humans. Do computers have intuition? Definitely not! As I said, they are only optimising. And yet Lee Sedol said, after losing to the computer, that it had changed his way of thinking about GO. The computer did not play greedily; it was enough for it to win, even if the difference in points was small. Lee also said that what surprised him most was that the computer showed that moves he had thought creative were in fact conventional. The computer showed what creativity can be like in this game. Move 37, made by AlphaGo, went down in the history of this extraordinary game as, according to its masters, beautiful and creative. But these are just games, puzzles. Nevertheless, it is already clear that there is some interesting creative potential in AI’s approach. It is undoubtedly extraordinary material for an artist, worth exploring for its properties.

Finally, returning to the amplification of human potential by machines. The interesting question is: if machines had empathy and compassion, would they help us to become better people? I am an optimist (laughter).

So you see AI as another tool in the hands of the artist. What new perspectives does this open up for art?

The cooperation of man with a creative machine will open up new forms of creativity for us. I was very inspired by Yamaha’s experiment, in which a dancer became a pianist thanks to machine learning. I would like to stress that this is cooperation, not rivalry. Krzysztof Garbaczewski put it well in a discussion in which we recently participated. I quote: "the creativity of artificial intelligence can only exist in cooperation with people. It may be interesting, but it seems to me that if the works were made by artificial intelligence only, they would be made for artificial intelligence only". I would add that artificial intelligence will never be human intelligence, because it will not have a human body. To explain this, I would have to go into the theory of embodied cognition, but that is a topic for a separate interview (laughter).

I feel that you would like to see a machine learning course at the academies…

The barrier of thinking "it is too difficult, I do not understand it, so I opt out" has to be overcome. Humanists must be involved in shaping the world around us, which is increasingly influenced by technology. They cannot ignore a field, saying they do not understand it. What I have been trying to do for more than 25 years is to teach artists, designers and everyone who says they have an artistic mind and that this is not for them – how to program! Because it is not true. And an extraordinary thing happens: once such a "humanist" has learned to program, he or she starts to think differently about the world of technology, gains a sense of agency, and feels able to become involved in the process of creating and shaping this world. This is one of the reasons why Krzysztof Golinski from panGenerator convinced me to create the Creative Coding studios.

I promote a holistic approach. Everyone should know a little about everything, while developing what they can do best. For me this is programming – a dying craft (laughter). This helps to establish a dialogue. One more thing: we lack harmonious development. At universities, too much emphasis is put on the mind, and too little is done with the hands. It might seem that, since we teach programming at Creative Coding, students sit in front of computers most of the time. Not true – they spend maybe 15% of their time in front of them. Facing the material world is much more difficult than the challenges posed by the virtual one. Students do write code, but they also build the electronic circuit that controls an object. Apart from the mental work, they have to solder something, route some cables, create the mechanics of an object, and cooperate with digitally controlled machines.

To sum up, we can say that media-lab culture can be such a tool. It is a form of interdisciplinary activity that enables people with different skills to work and learn together, using media and technology, but also drawing inspiration from science. The basis is the experiment: the experience of constructing artefacts and observing their influence. An exchange of experiences; openness – and not only within the group, but openly sharing results with the world under open licences. An example from our own backyard: everything we do during Creative Coding classes is available on GitHub.

Can art explain science and new technologies?

Much more – it is not a passive narrator. It contributes to the development of science and sheds new light on our understanding of what technology is. It gives it a hook; it changes the perspective. I like to tell my students that we teach them not to be afraid of doing unnecessary things. Let’s leave solving problems to the engineers. And what happens? When those engineers look into our Creative Coding lab, they say, "This is where real innovation happens." I laugh that we do not even know what the word means. They are surprised at how their technologies can be used. This develops both sides. The same is true in the social context: art sheds new light on the technology we use on a daily basis.

Initiatives combining art with science and technology are extremely valuable – such as the European Digital Art and Science Network. I was very impressed by CellF, created by Guy Ben-Ary and a team of scientists: a synthesizer controlled by a network of real, living neurons. I saw it improvise live with musicians.

The creative interdependence of science, new technologies and artistic experimentation can change how we imagine and practise art. Will everyone become a creator? This was not the case with the Internet, but with machine learning – and in the future with other AI techniques – it may happen. I will quote Krzysztof Garbaczewski again: "Today everyone is an artist, because everyone has Instagram. The development of artificial intelligence may result in the artists being thrown out of business." Well, that’s it. Our brain works in such a way that it is better at recognizing patterns than at creating new ones. But in cooperation with an algorithm we gain a superpower: a computer can generate many variants, and we choose what suits us. That is how remixes are made using machine learning filters, and it is going to happen more and more. Undoubtedly, the further development of AI will have an even greater impact on culture and our contact with culture than smartphones have had.


Cywilizacja Algorytmów / The Civilization of Algorithms

Aleksandra Przegalińska

Biuro Tłumaczeń Narrator


Natalia Krasicka
Katarzyna Nowicka