The second self, ASI and elastic time

Over the years, I’ve been intrigued by cyborg anthropology in general. This really means I’ve been interested in how humanity interacts with the technology we are currently inundated with and encumbered by, both physically and mentally.


There are a couple of concepts that seem to be on a journey towards each other: the second self and, by association, elastic time. But as we barrel ever faster towards ASI or something closely resembling AGI, we might discover something else: technological precognition.


Before we get to that, though, there are a couple of things to unpack. The second self is an easy way to refer to your online presence: what you choose to show to the world, and what the world chooses to show you, via this layer of crafted demiconsciousness.


Elastic time refers to how your second self interacts with different things at different times, but only realises itself when its biological host engages with it through the menagerie of apps and websites. You can experience this when you check a WhatsApp group message after a weekend away somewhere without signal: you are biologically downloading information that was stored by your digital side.


While the second self is isolated from its biological host, it behaves like a passive sponge, taking in all the information and remembering everything in an instant archive. With the advent of group chats on apps like WhatsApp and Discord, there are multiple second selves passively and constantly interacting with each other on your devices.


In the elastic timelines of our connected universes, time itself starts to mean different things depending on perception: viewed subjectively, it creates elastic snapshots of meaning. This generates mutual understanding and expression, metaphorically within the internet, but literally within a collective mind.


Artificial superintelligence, aka ASI, is AI aimed specifically at targeted issues to solve. The current dream is to create an ASI that can finish science, which sounds wacky to even consider, but super-smart people across the world are taking this aim seriously. My point here, however, is to look at what effect a targeted ASI could have on your second self, naturally linking our second selves to each other in these joint spaces.


ASI in this case would be able to sift through these elastic timelines in our absence, creating and maintaining a black-box-esque conversation between your second self, the other entities contextually interacting with it, you, and the duality of the two.


When a digital clone of yourself, within a closed environment, has the freedom to interact with others of its kind in a different version of the same timeline, the results of those interactions are likely to run far deeper than any one online interaction. It will sooner come across as telepathy.


This idea of technotelepathy isn’t exactly accurate, though; it would be closer to technological precognition, which for the time being I’ll call TPC. In the context of the group chat, conversations would happen before their biological counterparts have them. What, then, would a normal conversation be worth?


TPC doesn’t exist without the biological conversation. The conversation would be its own prompt, which, without updates, would circle the same information. That’s not to say it wouldn’t be interesting or have its own merit, but it would have no input into the current conversation. Instead, the second selves would monitor the chat, and each input would have a finite number of conversations that could spin off from it.


One of these finite options would be the way the conversation actually went, but every potential conversation that could have been real has already taken place, albeit in a simulated realm. The longer this goes on, the more predictable the biological counterparts become. If all the second selves agree on where the conversation is going, the hive can interject, speeding up meaning and understanding within the group.
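The mechanism can be sketched as a toy simulation. Everything here is hypothetical, just to make the branching idea concrete: each message spawns a finite set of possible replies, the second selves simulate them ahead of time, and the "real" reply turns out to be one of the pre-simulated branches.

```python
# Toy sketch of the TPC mechanism. The reply model is a made-up
# placeholder: every message has a finite, enumerable set of replies.

def simulate_branches(message, replies_per_message=3):
    # Enumerate the finite conversations that could spin off this input.
    return [f"{message} -> reply {i}" for i in range(replies_per_message)]

def precognition(message, actual_reply_index):
    branches = simulate_branches(message)  # simulated in our absence
    actual = branches[actual_reply_index]  # the branch reality ends up picking
    return branches, actual

branches, actual = precognition("weekend plans?", 1)
assert actual in branches  # the "real" reply already happened in simulation
```

The point of the sketch is only that the real conversation is always a member of the simulated set, which is what makes it feel like precognition rather than prediction.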


The digital hivemind would get things done in a fraction of the time, while the biologicals would have veto power over decisions made on their behalf. Though, as time progresses, the veto would become less and less necessary. The hivemind would become, in essence, the conversation, and the biologicals would just be there to download information.


This notion, however dystopian or utopian you might find it, is yet to be named. I was spitballing some ideas around the other day, and this is what I’ve come up with: TPC/cog, bubble, projection, double, holo, DS (digital self), perception, counterpart, update, monitor, constant, mind, sift, absence, duality, or clone. My current favourites are cog, absence, constant, and counterpart, though I feel counterpart is the most accurate at the moment.


If something like this does happen, and it is likely to, then the people of these chats will live on longer than their biologicals, leaving snippets of a communal soul to be absorbed by a more advanced form of AI down the line. Is this a step towards a version of digital immortality? Or will it just be an echo of humanity in the memory of the next step of human evolution?


On cyborg anthropology, citizen journalism and the universal basic income in relation to AI automation.

October 14, 2024

personal, cyborg anthropology

This is just the tip of the iceberg.

“Maybe the only significant difference between a really smart simulation and a human being was the noise they made when you punched them.”
― Terry Pratchett, The Long Earth

 

Wait! Am I a cyborg?

Cyborg anthropology is the study of the relationship between humans and technology; more specifically, computers. The subject is as much an in-depth look at human beings as a species as it is a study of the past and its prophetic insights into the future.

Rather than thinking of a cyborg as a physically augmented human, think of cyberspace as a metaphorical augmentation of the human mind. Building on that, it is also a lens to spy on the horizon and what is just beyond it.

The idea of cyborg anthropology is quite a sweeping one and can encompass a variety of subjects. However, in this case, the current lens is aimed at the ubiquitous availability of information we are currently immersed in as a species. 
 

Tech is cheap but so is life

This is actually more to do with the ubiquity of cheap technology finding its way into the poorest parts of the world. Technological stepping stones are giving developing communities hope for a better world and more advanced infrastructure, without the need for big payouts. The invention of the telephone, which spread through the developed world, set off a domino effect in technology that would later form a strong foundation for the smartphone.

A satellite launched from the Russian cosmodrome in southern Kazakhstan, using parts developed by NASA that were made with the help of COMAC (Commercial Aircraft Corporation of China), is allowing a young businesswoman in Kenya (ranked 6th on the extreme poverty index) to support her family, using microfinance that she controls on her smartphone.

Though there isn't currently strong backing for independent regulation in most of the affected areas, it is still a stride out of poverty for some of the worst-affected people on the planet.
 

Reports just in: Cyborg rights

Citizen journalism is a natural progression that rose from an interconnected, globalised world, using the combined powerhouses of cheap tech and social media. The discussion of its validity in objective journalism is as complex as the stories it covers, so, sidestepping that wriggling can of worms (it warrants its own post), it has contributed to some earth-shattering revelations.

The Arab revolutions, popularised by the term Arab Spring, started on the 17th of December 2010 with the Tunisian revolution, energetically spilling over into 5 other countries and a further 15 countries in the region with fragmented enthusiasm. We now stand in the shadow of what has been a colossal failure, leading many to jump on the bandwagon of calling the current climate in the region the Arab Winter, a term coined by Prof James Simms Jr.

The effect of citizen journalism on this particular occasion was staggering, not only because of the speed, depth and scale at which it travelled, but because of the content. It was people vindicating themselves through technology and social media. For the first time it was obvious how much power the internet had when used in this way; so much so that, within the time frame of the revolutions, Egypt, Libya and Syria shut down the internet within their borders.

Meanwhile, Tunisia, Saudi Arabia and Bahrain breached confidentiality laws, hacked into accounts, and arrested and allegedly killed some agitators. This rehashed the interesting discussion of whether or not the internet should be deemed a human right, a conversation that started at the 2003 World Summit on the Information Society.

This was resolved in 2016 by the UN Human Rights Council with a non-binding resolution condemning the withholding of internet access by governments. This means that if any government does intentionally disrupt its people's access to the internet, the UN will cross its arms and pout, because technically no law has been broken.

 

Stepping stones

If we think of the upcoming technologies approaching us at an ever-growing pace, we can start to think of the impact they will have globally. The main piece of future technology we have hold of, tentatively, by the throttle is AI. Tentatively, because AI developers have already got to the point where they don't fully understand the reasoning behind the decisions made by AI programs. These systems are structured as neural networks.

The same way a baby's potential neural networks make connections through repetitive learning, the AI does the same, metaphorically speaking, so any reasoning is done inside its virtual mind. The University of California, Berkeley, amongst others, has created a process for going through the machine-readable reasons behind whatever the machine did in any one scenario and analysing them.

The long strings of code are then deciphered into legible sentences, allowing future algorithms to produce an interface to ask questions and decipher answers. Up until the AI learns how to lie, of course.
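As a toy illustration of that deciphering step (this is not the Berkeley method; the features, weights and threshold are all invented), a model's numeric "reasons" can be ranked and turned into a legible sentence:

```python
# Invented example: a linear "stop sign" detector whose numeric reasoning
# is turned into a legible sentence by ranking feature contributions.

weights = {"red_sign": 2.5, "octagon_shape": 1.8, "white_text": 0.4}

def explain(features):
    # Contribution of each active feature to the decision score.
    contribs = {name: weights[name] * value for name, value in features.items()}
    top = max(contribs, key=contribs.get)  # the biggest single reason
    decision = "stop" if sum(contribs.values()) > 2.0 else "go"
    return f"I chose '{decision}' mainly because of {top}."

print(explain({"red_sign": 1.0, "octagon_shape": 1.0, "white_text": 1.0}))
# -> I chose 'stop' mainly because of red_sign.
```

Real interpretability work is vastly harder than this, but the shape of the idea is the same: machine-readable contributions in, human-readable explanation out.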

The point still stands that we are on the verge of a paradigm-shifting technology with far-reaching consequences in the coming years. Moore's law has stayed relatively on point since its inception in 1965 by Gordon Moore, co-founder of Intel, describing a doubling of transistor counts roughly every two years (popularly quoted as a doubling of power every 18 months), creating the exponential curve you may or may not have come across.
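Taking the popular 18-month doubling figure at face value, the compounding is easy to sketch (a naive back-of-the-envelope model, not a real performance forecast):

```python
# Naive Moore's-law compounding, assuming power doubles every 18 months.
def growth_factor(years, doubling_months=18):
    return 2 ** (years * 12 / doubling_months)

print(growth_factor(1.5))  # 2.0 -- exactly one doubling period
print(growth_factor(52))   # ~2.7e10: 1965 to 2017 under this naive model
```

That ten-orders-of-magnitude figure is why the curve looks so absurd when plotted, and why the cheap end of it moves so fast.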

This translates quite nicely into sales projections over the same period (which is quite obviously a glitch in the matrix. WE ARE IN A SIMULATION!). The higher the curve climbs, the cheaper the tech becomes at the lower end of it. When AI and AI-integrated tools become as readily available as the Xolo 900 (yes, that is regrettably pronounced “yolo”), Intel's first low-cost smartphone, the need for human input in a lot of industries is going to be severely cut.

This goes even for the poorer countries where rampant free-market capitalism exploits people in need through cheap labour. When AI automation becomes more economical than using cheap labour abroad, and the factories where that cheap labour was sourced start upgrading to AI-automated machines, what will happen to the global citizen?

 

The rise of the automaton.

So when exactly are we going to be sidelined by the machines? Is there a date we can collectively put in our calendars as the apocalypse? The short answer is no. The somewhat less short answer is we'll have to get back to you. Amongst the wild speculators, the Guardian newspaper suggested 6% of all jobs will be automated by 2021, while the Metro chimed in that it would be 40% by the dawn of 2030.

One of the many studies floating around is the Future of Humanity Institute's survey (yes, apparently that is a thing that is real), published on arXiv (which you can, if you so wish to have your mind melted, read here). The survey consisted of 352 machine learning researchers across the world. They predicted a 50% chance that machine learning will outperform humans in all tasks within about 45 years. That's the same as your chances of running for the bus and catching it in London.

The finale won't come for 120 years according to the North American researchers, when all jobs will be man-handled/gender-neutral-handled/...bot-handled by complete high-level machine intelligence (HLMI) automation. This eventuality is estimated to be 44 years earlier by the Asian researchers in the survey. The study was published on the 24th of March 2017, so put a line between 2093 and 2137 for the potential apocalypse and/or utopia.
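That 2093–2137 window falls straight out of the survey's publication date:

```python
# Deriving the full-automation window from the survey's 2017 publication.
survey_year = 2017
full_automation_na = survey_year + 120           # North American estimate
full_automation_asia = full_automation_na - 44   # Asian estimate, 44 years earlier
print(full_automation_asia, full_automation_na)  # 2093 2137
```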

Ironically, the survey asked the researchers about their own redundancy, which seems somewhat mean-spirited, but it had to be asked: when will machine learning researchers be automated? 88 years is apparently the answer; however, if you were asked that question, it would be justifiable to fudge the numbers a bit to give yourself an extra decade's worth of funding.

This is akin to the scene in the first Avengers movie where Iron Man push-starts the turbine to get the engine up and running again: the half second he gets to exclaim he may be in a bit of a pickle before the weight of humanity's unemployment smacks him in the back, leaving him winded and somewhat broken inside...wait, I'm mixing metaphors. The point still stands that passing the torch is going to be difficult, and in more places than I care to think about it is going to be exceedingly painful.

Within the next decade, quantum computing is pegged to achieve quantum supremacy, which is when a quantum computer trumps the strongest of classical supercomputers. When it does, it will be able to boost normal computers significantly through connected cloud computing. Using quantum simulation, quantum-assisted optimisation and quantum sampling, machine learning will get a shunt into the future (read more in this article from Nature: the international weekly journal of science).

Also within that period, there will be a strike-out list of jobs that will have been automated. It starts out relatively mundane but grows into quite an obvious trend.

  1. learn how to play Angry Birds – 3 years

  2. transcribe speech – 7 years

  3. translate (versus an amateur) – 7.5 years

  4. read text aloud – 8 years

  5. write a secondary-school-level essay – 9 years

  6. explain own actions in video games – 10 years

  7. replace retail salespeople – 15 years
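Mapped onto calendar years from the survey's 2017 publication, the offsets above come out as rough dates (just the arithmetic, not a new prediction):

```python
# The survey's predicted offsets (years from its 2017 publication)
# converted to rough calendar dates.
survey_year = 2017
milestones = {
    "play Angry Birds": 3,
    "transcribe speech": 7,
    "translate (vs an amateur)": 7.5,
    "read text aloud": 8,
    "secondary-school essay": 9,
    "explain own game actions": 10,
    "replace retail salespeople": 15,
}
for task, offset in milestones.items():
    print(f"{task}: ~{int(survey_year + offset)}")
```

Retail automation landing around 2032 is what drives the schooling point below: that's roughly when a child who was three in 2017 finishes school.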

This means that a 3-year-old going into nursery this academic year (2017) will, if this trend is accurate, leave school into a world without any available non-skilled jobs and with a waning skilled job market. This is going to put intense strain on the education system as a whole and on its ability to ready the generations to come. We all need to know how to tackle what is, in the current framing of the problem, an unwinnable scenario. It will probably create a culture of lifelong learning in the coming decades.

 

A free lunch

There are a couple of potential ways of combating the AI automation of the majority of society's jobs, though none really come to mind apart from the universal basic income. UBI is not a new idea, and it is beginning to regain traction in political arenas around the world. The general idea is unconditional free money for all, which, to a lot of people coming across it for the first time, sounds like an implausible utopian dream.

One of the first inceptions of the idea came in the 16th century from a man named Thomas More, in his book Utopia, published in 1516. Since then it has been batted about in various forms, up to the modern experimental form we are witnessing today. A little-known fact: in 1969 Richard Nixon almost passed a bill that would have legislated a form of UBI across the U.S., but before he could sign it, one of his aides highlighted a study that pointed to higher divorce statistics due to women's financial independence. This was not true, of course, but it has still been used repeatedly to discredit the idea.

There has also been some stirring on the subject of the universal basic income in venture capital circles. Sam Altman is the president of Y Combinator, the Silicon Valley seed accelerator that has invested in companies such as Airbnb and Dropbox. The Y Combinator blog has revealed a pilot project they are funding in Oakland, California. Depending on the outcome of this scheme, they are potentially going to fund a fully fledged experimental one.

The current climate of the debate always comes back to an awkward fact for the “for” side: there is no long-term data on whether or not it is beneficial or sustainable. This study, and others like it going on around the world, will provide the quantitative data needed to back claims, but will also give a qualitative insight into the transformative capabilities of implementing a UBI.

There is a fierce debate going on wherever you find the idea of UBI, with valid points on both sides that should be considered and tested fully before implementing it in society. The last thing we want is to cause the collapse of society with one swift blow. The main argument against, which is more a reasonable question than a stone wall, is: how can you possibly pay for this? Well, one of the ways that has been put forward is replacing the current welfare system.

  • UK welfare budget for 2013/14 (as an example)

  • Total welfare spending: £251 billion

  • Population: 64.5 million

  • of which children: 15 million

  • Per head (including children): £3891 annually / £324 monthly

  • Per head (adults only): £5081 annually / £423 monthly
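A quick back-of-the-envelope check of those per-head figures, using only the budget and population numbers above:

```python
# Sanity-checking the per-head arithmetic from the figures above.
budget = 251e9       # total welfare spending, £
population = 64.5e6
children = 15e6

per_head_all = budget / population
per_adult = budget / (population - children)

print(round(per_head_all))    # 3891 -- matches the annual all-heads figure
print(round(per_adult))       # 5071 -- a touch under the £5081 quoted,
                              # likely rounding in the population inputs
print(round(per_adult / 12))  # 423 -- matches the monthly adults-only figure
```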

This is in no way a definitive way of paying for such a mammoth initiative, but it does shine a meek light on the first step of a debate too big to cover in an already wide-ranging post. This is a weighty subject to end on, so it will have to be fleshed out in more depth in a later post, but if this brief introduction has piqued your interest, then check out Compass for further reading and Basic Income for a way to join the debate.

Thanks for reading; don't forget to comment below with your opinions on the topics in this post.

Exhaustedly,

Chris from RockdoveRehab


Hunt: A photographic game

April 1, 2016

An Introduction

This is a game I came up with in college, which then resurfaced with a shinier presentation at university. It gives you a plausible reason to enter people’s lives without too much fuss and shows an intertwining network of social connections, while keeping communication lighthearted.

Concept

This photographic game is nomination-based and lends itself to other art forms, such as sketching, video content creation and journalistic writing. The hunter of the game nominates the first person; depending on what you are searching for within the hunt, the criterion can be anything you can think of. As an example, you nominate someone because they once made you laugh so much that a small amount of pee came out. We all have that person in our lives...and if you don't...do this exact version of the game.

So, once you've found this person, take their portrait; their caption is the reason you chose them. For example, [pictured: John Doe] "he made me wet myself...in a good way". You then get your nominee to nominate someone who made their heart skip a beat because of their humour, depending on the scope of the game. I normally say that for your first game you should be able to do all the pictures in the same day; I'm working on a concept that may take months or years, depending on what I find. You keep going till a natural end, or until you reach your self-imposed target.

New Game?

When I was 10, I had an asthma attack that almost killed me; my lungs had seized up to such an extent that it was impossible to get air into me. The doctors paralysed my lungs, and a doctor called Drinker breathed for me using a makeshift hand-operated child pump for hours, until I was transported to Guy's Hospital (Greenwich hospital didn't have any child-sized breathing apparatus; it was closed shortly after, but I do wonder how many kids weren't as lucky as I was).

There are a couple of people I have met in life who are noteworthy in the creation of my identity as it appears today, but none of them directly saved my life like Drinker did… Though I never met the person who saved my life, I still want to say ‘Thank you!’ The idea is to track her down and start a new game of Hunt with people who have had a pivotal influence on the nominees’ lives.

However, so far I can't find Drinker, and there is a possibility I never will. But to the person reading this, whoever you are: give your own version of this game a try.

Nostalgically,

Chris at RockdoveRehab