One person's utopia is another's dystopia

Yoshie Itasaka

Journey through the Land of Israel-Palestine

"Every constructed reality is a myth, and the more people carry it on, the more real it becomes. Until you manage to look behind the surface."

All current problems stem from past history. The recognition of facts can become disordered by differences in each country's historical perception, by historical revisions in each era, by conflicts within each era, and by the personal memories [histories] passed on by immigrants [personal experiences] during war. Perceptions of morality may also differ according to religious and ethnic views. We must constantly recognise each change of era [shifts in a country's beliefs due to changes of government, changes in world affairs]. Conflicting world views, beliefs and religions still exist in this world.

-Is it possible to have a utopia in which everyone’s ideals are realised?-

Our generation will have to rethink the true history and move on from the past.

Yoshie Itasaka, born in Osaka in 1984, started her nomadic journey in 2010. From 2010 to 2013 she traveled through North America; from 2013 to 2018 she traveled around Europe [including the former Soviet countries and former Yugoslavia], Russia and Israel.

Language: English


Thomas More's Utopia is arguably the first science fiction ever written, because the piece is effectively about people living in a perfect world. Plato's Republic doesn't count because it's a treatise on how to create Plato's vision of a perfect world.

The fact that I find both these perfect worlds horrifying is actually not that uncommon: no matter who builds a perfect world, we imperfect humans will always go and mess it up. On top of that, my idea of perfection is not going to be someone else’s – quite possibly anyone else’s.

So naturally, any kind of utopia is actually going to read as a dystopia to most people.

On the flip side, dystopias have a long history in literature, sometimes as moral lessons of the 'if this continues' type, sometimes as satires, and of course as the mess the heroes have to try to fix.

Okay, this ramble does actually have a point. Just let me find it again. Oh. Yes. Literary utopias are usually a resounding failure because either they’re assuming humans… aren’t, or because they just can’t work with human nature. Or – more usually – both. [Funnily enough this applies to political utopias too. Whodathunkit?]

How often have you read about the idyllic lifestyle of this or that people and thought there was no way that kind of society could work because people just aren’t like that? If you’re anything like me, it’s more than a few – but if we sat down and compared notes, there’s a good chance that you’d like some of the ones I hate, and vice versa.

So, let’s take a look at what makes a Utopia/Dystopia.

Start with the theoretical one size fits all, which in practice quickly turns into one size doesn’t fit anyone very well and doesn’t fit some people at all. If you’re one of the ones that it more or less fits all right, you can probably live with the place, but if it fits too badly or doesn’t fit at all the place quickly becomes sheer hell [this, incidentally, is one of the reasons I loathe pantyhose. I’m in the ‘sheer hell’ group]. Not even clones will flourish in a monoculture – there will be enough experiential difference to ensure that some are misfits.

Now add human nature, which isn’t changing any time soon. We’re social critters, with all that implies: namely hierarchical and with deeply ingrained enforcement behaviors. Whether they’re instinctive or not isn’t relevant, because they’re so strongly wired that they might as well be instinctive. No matter how you raise your kids, how carefully you keep them from the whole notion of competition and leaders and such, within seconds of meeting a new person they’re sizing that person up and figuring out where they ‘fit’.

When you watch someone – anyone – meeting a new person, a lot of what goes on in the first few seconds is working out whether that person is above, equal, or below in the social hierarchy. If the two have different ideas of who belongs where, there’ll be quite a bit of work on the part of whoever thinks they should be superior [i.e. both of them] to establish superiority in a way the other recognizes. The winner will stand a little taller, the loser slump a bit – and as often as not neither one is consciously aware of what just happened.

That covers the leaders and followers [shepherds and sheep]. There are two other very general 'types' you'll find in any large enough group. They're rarer than shepherds and sheep [this is why you need a larger group], and both of them will get attacked by shepherds and sheep if they reveal themselves. Predators [wolves] are kind of obvious. They pretend to be sheep so they can profit from other people. And no, this does not mean profit per se is evil. Using other people is – if you get all the benefit and don't pay any of the costs, you're a predator. If it's a crime with a victim [and yes, this includes running a company into the ground to get everything you can from it then walking away and leaving a bankrupt company and thousands of ruined lives], it's a predator thing. So is mooching off someone all your life – although that could be argued as a more parasitic behavior.

Then there are the goats – the independent-minded explorers and questioners who won't go with the herd unless they've decided independently the herd is the right place for them. No-one likes them: they often identify predators first, they make the sheep uncomfortable, and they question the shepherds. This is why you get a lot of misfits in certain areas – they identify each other and band together for their own protection. [As a side note: the USA is a nation settled and founded by goats. Over 200 years after the war of independence, that still shows – and is why the other 'colony' nations have the most in common with the USA. It's also why a heck of a lot of the inventions that materially changed people's lives originate in the former colonies. Penicillin: USA and Australia, cars: USA, planes: USA, drought-resistant damn near anything, Australia [Yes, I've left a lot off. It's not meant to be exhaustive]].

There is no such thing as a society where even all the sheep are happy. The best thing you as an author can do is build something that mostly takes into account human nature and has a whole bunch of incentives that tend to guide human nature towards improving things for their fellow-humans rather than destruction or power games [trying this in real life tends to end badly. We’re perverse critters].

Some of the best fictional societies are – of course – Pratchett’s, which take all of the variety and perversity of human nature into account. Sarah covers some interesting ground with Eden and Earth in DarkShip Thieves, and what Dave does to all the cultural myths about who is superior, the noble/innocent savage, the simple religious folk, the – oh, the heck with it, what Dave does to practically every cultural myth ever in Slow Train to Arcturus has to be read to be believed [if you haven’t already read it, go and buy it. It’s worth it.]

Who else has good fictional societies? Conversely, whose alleged utopias horrify you?

[This is a talk I gave at FSCONS 2013, on November 9]

I've been studying, writing and talking about the dystopian side of technological progress for so long that eventually I got tired of it. So, instead of focusing on critical issues, I began to do research on what might be called the ”thought forms” involved in utopian/dystopian or positive/negative debates, particularly regarding digital technologies. The objective developments are clear enough, and often not very reassuring, but they are at least talked about and debated. The subjective, really personal side of it all, however, seems to me to have been less clearly articulated, or even brought to the fore forcefully enough. So, what might happen if we, in this largely technical context, begin to really focus on ourselves, as human beings? This has come to occupy my thinking more and more, and hence the title of this talk.

Let's start with two examples of what might soon become objective changes:

1. Recently the OECD issued a document called PISA 2015: Draft Collaborative Problem Solving Framework [PISA = Programme for International Student Assessment]. PISA has come to the conclusion that collaboration between humans might be a good thing – but how do you assess that? It's difficult to control the parameters involved in an actual collaborative situation sufficiently, in order to be able to assess it in a way that makes the results comparable across different years and countries. Therefore, it has "been decided to place each individual student in collaborative problem solving situations, where the team member[s] with whom the student has to collaborate is fully controlled. This is achieved by programming computer agents."

The document is quite explicit as to the reason for this artificiality in the assessment of [human!] cooperation: "When humans collaborate together, it often takes considerable time for making introductions, discussing task properties, and assigning roles during the initial phases of CPS activities [...] and also for monitoring and checking up on team members during action phases” [CPS = Collaborative Problem Solving].

I can't help getting the feeling that this very document must have been authored by a programmed artificial agent.

2. In September this year two researchers at Oxford University published a paper called The Future of Employment: How Susceptible Are Jobs to Computerisation? In their estimate ”about 47 percent of total US employment is at risk”. "Our model predicts that most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers, and labour in production occupations, are at risk. […] a substantial share of employment in service occupations […] are highly susceptible to computerisation." Furthermore, computerisation will principally be "confined to low-skill and low-wage occupations. […] as technology races ahead, low-skill workers will reallocate to tasks that are non-susceptible to computerisation – i.e., tasks requiring creative and social intelligence. For workers to win the race, they will have to acquire creative and social skills." [my emphasis]

So, here we have, on the one hand, a scenario in which the cooperative capabilities of students are to be assessed by means of cooperation with computer programs, because ordinary human cooperation is too messy to assess in a standardised fashion. And, on the other hand, we have a scenario in which the only jobs left to humans are the ones demanding creative and social skills. To me this indicates two different trajectories along which what might be called ubiquitous computerisation is heading.

One trajectory means that human beings are measured and assessed according to standards that are amenable to machine learning and machine intelligence. The tendency here is to disregard all the messy, all-too-human traits of which we, at heart, are so fond – in a friend, in a colleague, in a lover. The possible consequences of this trajectory seem to be clearly dystopian – speaking now as a human being. But from the perspective of the OECD bureaucracy, it is equally clearly utopian. It gets us closer to that ideal state of affairs in which students, as well as the educational systems of different countries, can be compared quite mechanically and efficiently. It will certainly be cheaper than any all-too-human – and because of that incomparable – kind of assessment. And why trust humans at all in this matter?

The other trajectory, in which the only jobs left to human beings are those requiring creative and social skills, is less clearly dystopian. It could be argued that it might be a good thing. Think of all the more or less boring or routine jobs that won't bore anyone anymore, because machines never get bored. On the other hand, we could note the wording ”for workers to win the race...”. There is a race going on between humans and machines, for jobs.

Now, speaking of trajectories, or trends, the question arises: How relentless, how deterministic, are these developments? If you ask Ray Kurzweil, Google's Director of Engineering, he will speak of The Singularity, a state of affairs in which the capabilities of technology far outstrip the capabilities of technologically non-augmented humans. We won't be able even to enter the race without more or less merging with our technological creations, which will start to evolve independently of us, if they haven't already begun to do so. The latter view is espoused by Kevin Kelly who, in his book What Technology Wants, calls the emerging results of this allegedly autonomous technological evolution The Technium.

I think there are solid reasons to believe that Kurzweil's Singularity scenario, based on an exponential growth of machine intelligence, and Kelly's evolutionarily deterministic Technium are really nothing more than fairy tales.

I don't mean that in a disparaging way. The Singularity and The Technium are mythological conceptions, and this is important, but they are emphatically not science. The Singularity, in particular, is now almost becoming a household word among many educated people. It catches on because it ties in so well with the kind of science fiction futures we've been promised for well over a hundred years by now. And now, at last, science and technology seem to be catching up with fiction.

One of the clear signs that The Singularity is a myth is that it is very inspiring. That's what real myths are for. And the thing with inspiration is that it can be exhilarating as well as terrifying – and sometimes it's the terror that exhilarates. These mythically induced emotions, exhilaration and terror, constitute the life breath of utopias and dystopias, which is to say that they're not very intellectual, or even – I fear – very intelligent. Intelligence is a tricky concept, and I think it means rather different things for human beings and machines.

I also think that we are subject here to a kind of collective illusion which says that rationality, logical arguments, efficient [and preferably cheap] procedures are the essence or the epitome of ”intelligence”. And I think that this is a very serious mistake, a very dangerous illusion indeed. But it can be difficult to really grasp this, unless one manages to get very much closer to our everyday lives when thinking about it.

But let us stay with the big picture a little while longer. Recently I've noticed that many more people who are professional technologists in the digital industries are becoming critical towards some of the possibilities inherent in the increasing technologisation of society. I recall, for example, a long conversation I had with an information security consultant, who also had ties to Swedish intelligence agencies. His view of the future, privately, was extremely dystopian. I almost felt like I was talking to the Unabomber himself.

Ted Kaczynski, a.k.a. the Unabomber, is a mathematician who became so enraged by the impact of modern technology that he decided to kill to make his point, and he targeted individual technologists. He is also very smart, very articulate. You can study his really stark, really dystopian vision in his collected writings, Technological Slavery. [If you're a dystopian yourself it should be a real treat.] His vision of a future in which the technological society is not destroyed – despite his sincere wishes – goes like this:

"Suppose the system survives the crisis of the next several decades. By that time it will have to have solved, or at least brought under control, the principal problems that confront it, in particular that of "socializing" human beings; that is, making people sufficiently docile so that their behavior no longer threatens the system. That being accomplished, it does not appear that there would be any further obstacle to the development of technology, and it would presumably advance toward its logical conclusion, which is complete control of everything on Earth, including human beings and all other important organisms."This is exactly the same vision as in Kurzweil's Singularity, only with a resounding minus where Kurzweil sees mostly a plus.

Now, to conclude, let us go back to my initial example, the OECD's ”Collaborative Problem Solving Framework”. Here a thoroughly human context is invaded by technology for one reason only: it makes something qualitative and messy ostensibly measurable. Never mind that the overall context becomes so different that it is really unclear what actually is measured. This I see as an example of measurement mania.

My other initial example concerned the replacement of humans by machines in the job market. Here, too, there is a kind of mania at work. There are many jobs that I, for one, think both can and should be done by machines – but where does this vision, no, this real trend of wholesale replacement, based on procedural efficiency and relative cost more than anything else, come from?

I think both the measurement mania and the cost and efficiency mania come from the collective illusion I mentioned earlier – the view that rationality, logical arguments, efficient procedures are the essence, the epitome, of intelligence and civilization.

Unfortunately this is an illusion that by now is thoroughly manifested in all kinds of structures, institutions, routines. So it won't go away just because some people start to wake up and see it for what it is. But that doesn't mean that its influence is inevitable and deterministic. It only means that it is strong. And, fortunately, there are forces working incessantly to undermine it.

Yesterday, as part of my job as a consultant, I talked with some junior high school teachers about the impact of the new mandatory curriculum for all Swedish elementary schools, called Lgr 11. An overarching demand in Lgr 11 is so-called entrepreneurial learning, which emphasises abilities such as creativity, curiosity, self-confidence, the will and ability to try out your own ideas, etc. This clearly goes against the grain of traditional industrial school curricula, and thus against the grain of the very industrial society that has fostered the illusory mania of narrow rationality. It was quite inspiring to hear those teachers speak of the sometimes amazing changes pupils went through when they suddenly realised that their own interests, experiences, and ideas mattered.

What really stuck in my mind was a question put by one school's headmaster, a question that was evidently of some concern: ”But how do we measure the progress we have noticed with entrepreneurial learning?” It was clear from what she and the teachers said that the positive learning changes they had noticed could really only be noticed by themselves, on the basis of their personal knowledge and experience. It was, in other words, based on mature human judgment. Which is as it should be. You actually don't have to ”measure” in order to know. It may be quite enough to trust the judgment of experienced practitioners. But technologised bureaucracies are, by default, really uncomfortable with this. And clearly this kind of [all-too?] human judgment messes things up for the PISA mentality.

Kids in these schools will have a hard time accepting unnecessary formal and mechanical structures as they grow older. They don't know yet what they're really up against, but nothing in a life worth living is easy, is it?

As long as there are human beings who can stand up and say – with the actor Patrick McGoohan in the 60s TV series The Prisoner – ”I am not a number. I am a free man”, I, for one, refuse to believe in either utopias or dystopias.

/Per
