As AI research continues to accelerate, powered by cheap and accessible cloud computing along with open-sourced research papers and codebases, we march towards the much-talked-about point in time - the Singularity. The Singularity (for those unaware of the term) will be the point at which Artificial General Intelligence (AGI) crosses the threshold of “human-level intelligence” in all aspects that really matter. If you are thinking, “but that’s a remarkably unspecific event”, you are right. It is a conceptual event defined by technologists and futurists, and the term Singularity was coined to mark this incredibly significant point. However, to get a deeper understanding of the implications, we need to get specific. That’s what this post is about.


What constitutes intelligence? Is it being good at problem-solving in general, or getting very fast at solving one specific problem? Artificial General Intelligence, as mentioned above, is defined as technology that will be indistinguishable from humans in all aspects of intelligence. This raises a pertinent question as we move forward - what uniquely makes us human? What makes us special?

The validity of the answers we have had to this age-old question, such as the ability to talk or the ability to cooperate toward a common goal, keeps getting eroded as technology progresses. Have you wondered why Captcha gets harder every year? The line separating human from AI keeps getting pushed further out. Is there an end point?

We are compelled to draw a new line.

Does ‘True Creativity’ draw that line between a human and AGI?

State of the Artificial Art

Art has been an undisputed human stronghold for all of our known history. The ability to imagine is one of humankind’s biggest assets, enabling us to survive and thrive. The second decade of the 21st century saw AI making inroads into this territory. The big shot fired in this direction was the invention of GANs by Ian Goodfellow in 2014.

GANs, short for Generative Adversarial Networks, are a pair of neural networks incentivized to compete against each other. On every training iteration, the first (the generator) creates a new sample of the kind of data it is trained on (e.g., images or text), and the second (the discriminator) tries to identify whether it is a real sample or one generated by the first network. After enough iterations, the generator becomes remarkably good at creating samples almost indistinguishable from the real training data. GANs positively surprised AI researchers with their uncanny ability to generate new samples that look so ‘realistic’. They were applied to images, and before long, DeepFakes flooded the internet.
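To make the adversarial loop concrete, here is a minimal sketch of my own (not any particular paper’s implementation): a linear generator and a logistic discriminator, trained with hand-derived gradient steps to imitate a simple 1-D Gaussian. All names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 4.0, 1.0   # the "real data" distribution to imitate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b,  Discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0      # generator parameters
w, c = 0.1, 0.0      # discriminator parameters
lr, batch = 0.03, 64

for step in range(5000):
    # --- discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# After training, generated samples should cluster near the real mean.
samples = a * rng.normal(0.0, 1.0, 1000) + b
print("generated sample mean:", round(float(np.mean(samples)), 2))
```

The same two-step dance - update the critic, then update the creator against the improved critic - is what, at a vastly larger scale, produces photorealistic faces.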


The possibility of generating such realistic-looking images was too exciting to be left there. Not even the inventor of GANs fully understood how they manage to create images that look so real. So researchers on Google’s AI team took the next step by peeking inside the layers of a deep image-generating network to find out what is actually going on there, amplifying the patterns each layer had learned to respond to. The resulting images were, for lack of a better word, surreal. But also thought-provoking. Almost like Art.

This was too interesting not to be talked about. Researchers went ahead and created the Deep Dream Generator and let everyone play with it. And those at the intersection of art and technology noticed. Technologically aware artists started playing with the model, feeding it new kinds of data to see what it would throw out. With the new boost in research activity, generative models became better and better at learning the features (e.g., the style) of the images they were fed.

Daniel Ambrosi, a photographer who specializes in landscapes, saw the opportunity. What if he could feed such a model his portfolio of landscape photographs and let it learn all the aspects that make his work unique?

Daniel trained a customized generative model on his portfolio of landscapes and let it generate artful images in his style. He called them Dreamscapes. You can check them out on his website here.

Dreamscapes: A Collaboration of Nature, Man, and Machine - Daniel Ambrosi

The first big ripple in the art community came when an AI-generated artwork sold for $432,500. A remarkable detail: the famous auction house Christie’s had estimated it would fetch only $7,000. That, to me, reveals a lack of understanding of the impact Artificial Art is going to have on the existing art community. While the artists, collectors, and auctioneers debate what makes art art, let’s move on to something more exciting.

The Dawn of Creative Machines

The success of generative models wasn’t limited to images. OpenAI’s GPT-2, a transformer-based language model, achieved state-of-the-art long-form text generation in February 2019 (you can try an implementation here). With the arrival of AI bots that draft stories and create impressionistic art for us, we are entering an era in which we will increasingly have to work harder to create something that’s uniquely human.

In the near future (the next five years), I speculate that almost all existing content will become a commodity, something generated by bots. I wouldn’t be surprised if, by 2027, the market charges a premium for human-generated content, in the same vein as “organic meat” costs more than “industrial meat” today. There will probably be ‘human conscious’ photographers who professionally brand themselves as producing ‘uniquely human’ photographs.

To me, the near future seems more liberating than threatening. As creative AI bots come to our aid, they will offload the manual drudge work that has traditionally been an inseparable component of creative work. For example, I have interesting conceptual visualizations that I am unable to put on canvas with the skill of a trained painter - I lack the trained hand-muscle movements needed to lay down that particular brush stroke. The same applies to a sculptor, who must know how to optimally use the hammer and chisel (or the upgraded versions of those tools) to bring his vision to reality. AutoDraw (website) lets even a little kid draw without ever having trained the precise hand movements needed to render each shape.

Learning to play the guitar is a popular pursuit on university campuses among young men hoping to earn the ‘cool’ tag and impress women. I don’t have the patience to learn the manual component of playing the guitar - the memorization and the time-consuming practice needed to put the right press-and-pluck on the strings. But I do have an abstract understanding of composing music on a guitar by manipulating its physical features. More importantly, I have the imagination to create new compositions by altering the style, the rhythm, the pace, or some other attribute in a meaningful way.

Can narrow AI give me the power to play a full song on the guitar with only a good conceptual understanding and no manual training?

AI Orchestra (website) lets you conduct an orchestra of 20 (artificial) musicians playing 5 different instruments, so you don’t need any manual training at all (though this experimental version is very basic).

It’s not unfathomable that a stubborn 12-year-old in 2041 will want his dad to buy him a creative machine, which he will use to create all kinds of fun things to upload to some kind of social network and earn ‘digital points’ for being cool (with much the same motivation as an Instagrammer in 2019).

A Formal Theory of Fun and Creativity

Can we embed curiosity, novelty, and surprise in machines? Jürgen Schmidhuber said yes to this question over a decade ago. If Ian Goodfellow, the creator of GANs, is the cool kid on the AI block, the German AI scientist Jürgen Schmidhuber is the godfather of 21st-century AI. His formal theory of creativity laid out a theoretical framework for machines that can create science, humor, art, and beauty.

For Schmidhuber, all of this starts with meta-learning - the ability of machines to learn to learn, and to get better at learning what they are learning. This implies complexity compression: finding novel patterns that are simpler representations of known information, resulting in more efficient storage and computation. You can watch this 10-minute video of him explaining artificial creativity, with his usual humor.

Meta-Learning

To paraphrase Schmidhuber, simple is beautiful. Getting intrinsic rewards for being curious about novel, non-random patterns that allow the learner to compress its existing data history even further - that is what happens when scientists, artists, and comedians manage to create something new and meaningful.

Schmidhuber believes that all progress is towards simpler representations of useful information. Human brains evolved to store a lifetime of images at good-enough, not pixel-perfect, resolution. Machines incentivized to generate good art or good jokes can be built by giving them an intrinsic reward for discovering compression patterns that are more interesting than previously known representations of the same information, where ‘interestingness’ is measured by how much further the discovery compresses what is already known.
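As a toy illustration of this compression-progress idea - my own sketch, using off-the-shelf zlib as a stand-in for Schmidhuber’s learning compressor, so it only measures redundancy with past data rather than true learning progress - we can score a new observation by how many bytes the compressor saves on it thanks to patterns already present in its history:

```python
import zlib

def compressed_size(data: bytes) -> int:
    """Bytes needed to store `data` with a generic compressor."""
    return len(zlib.compress(data, 9))

def interestingness(history: bytes, observation: bytes) -> int:
    """Bytes saved on `observation` thanks to patterns already in `history`:
    cost of compressing it alone minus its marginal cost after the history."""
    alone = compressed_size(observation)
    marginal = compressed_size(history + observation) - compressed_size(history)
    return alone - marginal

history = b"the quick brown fox jumps over the lazy dog. " * 20

familiar = b"the quick brown fox jumps over the lazy dog. "  # pattern already known
noise = bytes((i * 97 + 13) % 256 for i in range(46))         # pattern-free bytes

print(interestingness(history, familiar), interestingness(history, noise))
```

The already-seen sentence scores high (the compressor exploits the history), while structureless noise scores near zero - echoing Schmidhuber’s point that both the fully familiar and the fully random are boring; the reward lives where new compressible structure is found.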

That’s all Newton did, he says. Newton found a common principle that explains the interaction between all objects, big or small, based on their masses and the forces acting on them. He compressed the data needed to understand gravitational interactions into a handful of simple equations.
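That handful can literally be written down. The two standard laws below (textbook physics, quoted here as an aside) are enough to recover every orbit and falling apple that once took volumes of astronomical tables to record:

```latex
F = G\,\frac{m_1 m_2}{r^2} \qquad \text{and} \qquad F = m\,a
```

Terabytes of observations, compressed into a couple of lines - which is exactly Schmidhuber’s notion of scientific progress.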

Is beauty also just a one-step-lower complexity compression of known information? An application of Schmidhuber’s creativity theory generated low-complexity minimal art, titled Femme Fractale.


Is even sentience, self-awareness - or, to use the more popular word, consciousness - just a byproduct of meta-learning as described above? Schmidhuber believes so.

Defining Consciousness

Is Consciousness the last frontier before Singularity?

Christof Koch is the Chief Scientist and President of the Allen Institute for Brain Science. Koch has spent years studying neurons - the ‘atoms’ of our brains responsible for perception, memory, behavior, and consciousness. His work has focused on documenting neurons’ diverse shapes, electrical behaviors, and computational functions within the mammalian brain, in particular the neocortex. Together with his mentor Francis Crick, he started the search to identify the minimal bio-physical mechanisms jointly sufficient for any one specific conscious percept. To put it simply, he is searching for where the modules of the human brain become a whole and consciousness emerges.

His search revealed an interesting part of the brain that has been getting a lot of his attention lately - the claustrum (wikipedia). To put it simply, experiments with a ‘consciousness meter’ (yes!) have linked the claustrum to the switching on and off of consciousness, and it has been identified as a hot zone of conscious activity in the brain. After discovering that this region connects to most brain modules and that activity spikes there during conscious experience, Koch has described the claustrum as the probable ‘conductor of the orchestra’ in our brain.

(Image source: New Scientist)

Integrated Information Theory (IIT) of consciousness, developed by Koch along with Giulio Tononi and others, states that the fundamental unit of consciousness is experience. Experience, as defined in IIT, is subjective, specific, and structured. It is made up of causal mechanisms, yet it is irreducible (you can read more on Christof’s view of consciousness here).

While the question of consciousness is hard and we can’t even agree on its definition, I can see the silhouettes of truly creative machines on the horizon. We should get ready for them.