
AI Rising

by Leslie D’Monte and Jayanth N. Kolla

216 passages marked


In reality, we are still leagues away from a fully autonomous, general-purpose intelligence or a machine that can truly feel and emote. That said, AI has made great strides in the past couple of years after a long second winter, in my humble opinion. It is an exciting time, and one thing is clear: now that AI is here, it is going to stay and define the next chapter of how we live, play, learn, and work.

These three A's of AI are like the three stages of maturity or intelligence levels of AI.

As early as 1943, McCulloch and Pitts began to explore how artificial neurons can mimic the human brain. This was well before even a commercial computer existed. But it was Frank Rosenblatt's perceptron in 1958 that was modelled on the structure of the human brain. Although the perceptron demonstrated that it is possible to make an algorithm learn and associate inputs with outputs similar to how our brain learns (as per our current understanding of the brain), it unfortunately failed to progress further, partly due to Minsky and Papert's 1969 critique.

we have no clear common understanding of what AI is. Minsky, in The Society of Mind, defines AI as the science of making machines capable of performing tasks that would require intelligence if done by humans. On the other hand, some recent trends in ML/DL tend to define AI more as an ability or capability to learn rather than one that acquires new skills.

our understanding of data comes preloaded with concerns about privacy and ethics. While this is an important topic, I believe there are ways we can handle data without compromising privacy or ethics. For example, a recent development such as homomorphic encryption, where we can do ML training and inference on encrypted data, is one indication of the future of data for AI.

It took evolution 3.2 billion years to create Einstein. How long would it take AI?

how do we fulfil this desire to make machines as intelligent as humans, or even more? This question poses a major challenge to scientists because our intelligence stems from our brains, and the human brain is very complex; we do not fully understand how it functions. What we do know is that the human brain comprises 80-100 billion neurons that help us think and feel, of course, with assistance from numerous glial cells.

Ken Hayworth is one such cognitive neuroscientist. President of the Brain Preservation Foundation, his long-term goal is to upload a human mind into a machine.

Preserving the brain at sub-freezing temperatures, a practice called cryonics, and resurrecting it when the technology is available is something you currently see only in movies like Demolition Man.

In our real world, Arizona-based Alcor Life Extension Foundation does offer a chance to preserve bodies indefinitely using cryonics. That's if you can shell out $2,20,000 per body. As of January 31, 2021, nearly 1,400 people have signed up to have their bodies preserved at Alcor.

ML, a subset of AI, provides systems with the ability to automatically learn and improve from experience without specific programming.

Microsoft partnered with a healthcare startup, Forus Health, to solve this problem by integrating AI-based retinal imaging application programming interfaces (APIs) into the startup's 3nethra devices using its own cloud and internet of things (IoT) solutions. This enables operators of the 3nethra device to get AI-powered insights even when they are working at eye check-up camps in remote areas with no or intermittent connectivity to the cloud.

For many businesses, AI is simply a sales pitch to make a product more appealing than it is, a trend that is known as "AI washing".

AI is not a single technology.

it's also important to realise that AI introduces a very high level of automation. ML, for instance, does not require explicit programming by a human. DL uses ANNs, also known as neural networks or neural nets, that simulate the human brain. Unsupervised ML can decipher patterns from humungous amounts of unstructured data and offer solutions without any human participation.

NLP, a sub-field of AI, helps machines process and understand human language in a given context to enable them to automatically perform repetitive tasks such as machine translation, summarisation, and ticket classification, among other things.

AI developers were raving about the potential of Generative Pretrained Transformer 3, or GPT-3, to produce humanlike text.

Google's Bard, which is powered by its Language Model for Dialogue Applications (LaMDA)

WebGPT is helping GPT-3 answer open-ended user questions with a text-based browser. OpenAI's neural (modelled on neurons in the human brain)

LLMs use transformer neural networks to read many words (sentences and paragraphs too) at a time, figure out how they relate, and predict the following word. However, while LLMs such as GPT-3 and models like ChatGPT may outperform humans at some tasks, they do not understand what they read or write, unlike humans.
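The "predict the following word" objective can be illustrated with a toy model. The sketch below is not a transformer; it is a simple bigram counter (all names and the sample corpus are invented for illustration), but the task in miniature is the same: given the words seen so far, predict the most likely next word.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

An LLM replaces these raw counts with a learned neural function over entire sentences and paragraphs of context, but the training signal is still "guess the next word".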

Similarly, children effortlessly learn to speak multiple languages.

AI excels at repetitive tasks and can disrupt the way we live, work, and play. American AI researcher and writer Eliezer Shlomo Yudkowsky says, "By far, the greatest danger of artificial intelligence is that people conclude too early that they understand it."

"No computer has ever been designed that is ever aware of what it's doing; but most of the time, we aren't either."

AI, simply put, is the desire to make machines as intelligent as humans or even more. The concept dates to the 1950s. British polymath Alan Turing mooted building intelligent machines way back in 1950. His Turing Test, for instance, assesses whether a machine can think like a human.

That said, AI formally took shape during a 1956 workshop that was held to explore how machines could be made to simulate aspects of intelligence. The workshop, the Dartmouth Summer Research Project on Artificial Intelligence, was organised by John McCarthy, who is credited with the first use of the term AI in the proposal he co-authored for the workshop with Marvin Minsky, Nathaniel Rochester, and Claude Shannon.¹

Weak or narrow AI is excellent at performing linear tasks that require repetition and practice. It does not think like humans, as shown in sci-fi movies. Even completely autonomous (Level 5) driverless cars and trucks, however impressive they appear, remain stronger manifestations of a weak or narrow AI.

Narrow AI machines also do not have a moral compass. For instance, if a driverless car encounters two pedestrians jaywalking in its path, it may randomly choose to crash into either of them. A human, on the other hand, may choose to crash into a pole rather than hurt fellow humans, even if the pedestrian is on the wrong side of the law. Simply put, a driverless car does not have a brain or conscience, so it cannot think like a human or make moral decisions.

We are yet to see machines with "strong AI", also called true intelligence or artificial general intelligence (AGI). In fact, we may or may not see such machines in our lifetime, despite the talk of achieving technological singularity: the point when machines surpass humans in intelligence.

DL, an advanced ML technique, uses layered (hence "deep") neural networks (neural nets) that are loosely modelled on the human brain.

Neural nets are an ML technique that allows a computer to learn how to perform a specific task by analysing hundreds of thousands of examples. Their actions can be "supervised" by humans, "semi-supervised", or even totally "unsupervised" by humans.

Neurons receive inputs layer by layer. The neurons in the first layer perform a calculation and send it (the output) to the neurons in the next layer. The process is repeated until there is overall output.

A node assigns a number known as a "weight" to each of its incoming connections and computes a weighted sum of the inputs arriving over them. If that sum is below a threshold value, the node passes no data to the next layer. Else, the node sends (or "fires") the number.

There is also a process known as back-propagation, which tweaks the weights of individual neurons to allow the network to learn to produce the desired output.
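The layer-by-layer computation described above can be sketched in a few lines of Python. This toy network uses the step-threshold rule from the passage (weighted sum, fire if it clears the threshold, stay silent otherwise); all weights, thresholds, and inputs are invented for illustration, and real networks use smooth activations so that back-propagation can compute gradients.

```python
def fire(inputs, weights, threshold):
    """Weighted sum of inputs; 'fires' the sum only if it clears the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total if total >= threshold else 0.0

def forward(inputs, layers):
    """Pass values layer by layer; each layer is a list of (weights, threshold)."""
    values = inputs
    for layer in layers:
        values = [fire(values, w, t) for (w, t) in layer]
    return values

# A 2-input network: one hidden layer of two nodes, then one output node.
hidden = [([0.6, 0.4], 0.5), ([0.9, -0.2], 0.3)]
output = [([1.0, 1.0], 0.5)]
print(forward([1.0, 1.0], [hidden, output]))
```

Each call to `forward` repeats the process the passage describes: the first layer's outputs become the second layer's inputs, until an overall output emerges.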

When the first trainable neural network, the perceptron, was demonstrated by Cornell University psychologist Frank Rosenblatt in 1957, it had only one layer with adjustable weights and thresholds between the input and output layers. Today's neural nets, of course, are very sophisticated.

reinforcement learning: a training method that uses rewards and punishments rather than labelled examples.
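A minimal flavour of learning from rewards and punishments is the two-armed bandit below; the payout numbers, noise level, and epsilon-greedy rule are all illustrative choices, not anything from the book, but the loop shows the core idea: the agent is never told the right answer, it only sees reward feedback and gradually favours the action that pays.

```python
import random

def run_bandit(true_rewards, steps=2000, epsilon=0.1, seed=0):
    """Learn which arm pays best purely from noisy reward feedback."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)   # learned value of each arm
    counts = [0] * len(true_rewards)
    for _ in range(steps):
        if rng.random() < epsilon:           # explore occasionally
            arm = rng.randrange(len(true_rewards))
        else:                                # otherwise exploit the best estimate
            arm = estimates.index(max(estimates))
        reward = true_rewards[arm] + rng.gauss(0, 0.1)  # noisy feedback
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return estimates

est = run_bandit([0.2, 0.8])   # arm 1 secretly pays more
print(est)                     # the agent's estimates should rank arm 1 higher
```

No labels are ever provided; the only teaching signal is the reward, which is what distinguishes reinforcement learning from supervised training.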

Ross Goodwin, an artist and creative technologist at Google, used the long short-term memory (LSTM) recurrent neural network (RNN) for his "Please Feed The Lions" project.

RNNs are a class of neural networks that enable output from an earlier step to be used as input in the current step. RNN models are typically used for NLP and speech recognition since they feature hidden states that remember some information about a sequence. This is applicable when a model must predict the next word in a sentence, which requires it to remember some previous words to complete this task.
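The "hidden state that remembers" can be shown as a single recurrence. The scalar weights and the tanh activation below are illustrative choices, not from the book; the point is that each step's output feeds back in as input, so earlier items in the sequence still echo in the final state.

```python
import math

def rnn_step(h_prev, x, w_h, w_x, b):
    """One RNN step: the new hidden state mixes the previous state with the input."""
    return math.tanh(w_h * h_prev + w_x * x + b)

def rnn_run(xs, w_h=0.5, w_x=1.0, b=0.0):
    """Feed a sequence through the recurrence, carrying the hidden state along."""
    h = 0.0
    states = []
    for x in xs:
        h = rnn_step(h, x, w_h, w_x, b)
        states.append(h)
    return states

# The final state depends on the whole sequence, not just the last input:
print(rnn_run([1.0, 0.0, 0.0])[-1])  # nonzero: the first input still echoes
print(rnn_run([0.0, 0.0, 0.0])[-1])  # zero: nothing to remember
```

This carried-over state is what lets an RNN-based language model keep some memory of earlier words when predicting the next one.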

The London branch of Christie's, the world's largest auction house, put on sale the work of an algorithm developed by the French art collective Obvious. The work was created using a model called a Generative Adversarial Network (GAN), which typically generates data from scratch, primarily images.

(Androids and avatars may look extremely realistic and lifelike, but when we examine them, they are not quite human. This makes people feel a sense of unease, strangeness, disgust, or even creepiness. This is referred to as the “uncanny valley".)

GANs, unfortunately, can be misused too. Given their potential to create fictitious people from scratch, GANs can also be used by lumpen elements and perverts to create fakes of celebrities, better known as "deepfakes", since GAN is a DL technique.

AI algorithms are also debating with humans. Project Debater, an AI tool from International Business Machines (IBM), engaged in the first-ever live public debates with humans in June 2018, when it argued on the topic of whether we should subsidise space exploration. The model is touted as IBM's next big milestone for AI and has been in the works for almost nine years.

Watson Speech-to-Text API (application programming interface). Project Debater's knowledge base comprises around 10 billion sentences from newspapers and journals. Using AI NLP technologies, it can recognise the same concept, even when stated differently.

Project Debater begins by searching for short pieces of text from its database to build its opening speech, to either defend or oppose a motion. Next, it constructs arguments to support its case by removing redundant argumentative texts. It then selects the strongest remaining claims and evidence and arranges these by theme, thus creating the base of its narrative.

Project Debater also uses a knowledge graph that allows it to search for arguments to support the general human dilemmas that are raised by the debate topic.

The real promise of AI appears to be in speeding up the process of designing, testing, and even making potential new drugs.

Microsoft India has a partnership with Apollo Hospitals for an AI-powered Cardiovascular Disease Risk Score API in India. Microsoft also uses AI for the early detection of diabetic retinopathy to prevent blindness, which we explained in the earlier chapter.

In January 2020, for instance, Google's DeepMind published an article about its ML system, AlphaFold, in the research journal Nature.

But before we get excited, let's understand that AI's current source of power, data, is also its Achilles' heel. Researchers continue to be perturbed by the fact that neural nets are "black boxes": once they have been trained on the data sets, even their designers rarely have any idea how the results are generated.

But doctors could use the same data to create "designer" babies: genetically modified children with customised traits, like a music or math genius, for those parents who can afford this technology.

Our society will also face vexing questions. Who will take responsibility if a fully autonomous Level 5 car kills a human: the owner, manufacturer, or software provider? And what if governments and terrorists begin using AI-powered weapons that leave no trace?

The report also recommended that China should stop unfair trade practices such as forced technology transfers and intellectual property (IP) theft. Chinese apps, too, have always raised suspicions about cyber espionage attempts and security risks globally.

About two dozen Chinese technology companies and venture funds, led by behemoths including Alibaba, ByteDance and Tencent, funded 92 Indian startups, including unicorns (those startups valued at $1 billion or more) such as Paytm, Byju's, Oyo and Ola, according to the foreign policy think tank, Gateway House.

However, current AI developments at least indicate that humans are unlikely to see a fully sentient AI-powered machine, one that can think, work, emote, create, and live like us, any time soon. Even the Hong Kong-based Hanson Robotics' AI-powered robot, Sophia, which is already a citizen of Saudi Arabia, is not even remotely close to being a human. Simply put, Sophia can be switched off; a human being cannot: we die from illnesses or simply old age.

Kochi-based startup Asimov Robotics developed a three-wheeled robot called "KARMI Bot" to serve food and medicines to COVID-19 patients, thus reducing the risk of infections for doctors and health workers.

Consider another example. When Northern India was attacked by locusts in June 2020, the Union Ministry of Agriculture claimed that India had become the first country to control locusts with the help of drones.

In India especially, the government's Aadhaar-enabled payments system and the Unified Payments Interface (UPI) have revolutionised the payments ecosystem. Currently, about 135 banks offer UPI. Further, QR codes will continue to be used for payments, and IoT is set to dominate micropayments by transforming connected devices into payment channels.

"Software is eating the world, but AI is going to eat software."

Financial service providers can, thus, rely on the digital presence of a loan applicant by assessing their online shopping habits, telephone bill payment history, or even social media profiles for determining creditworthiness. As most online transactions are done through a smartphone today, lenders are now easily able to track a prospective customer's online activity. Rather than using credit scores and credit history, fintech companies are now using something called a "social loan quotient" to assess a loan applicant and determine their creditworthiness.

As Srinivas Prasad, founder and CEO of Neusights, puts it:

Apollo Hospitals launched ProHealth, an AI-supported preventive healthcare programme, in 2019 through its 370 centres across the country, including its hospitals and clinics. It partnered with UK-based DXC Technology to create this AI tool. ProHealth has been developed based on the experience of over 20 million health checks conducted at the Apollo network of hospitals.

So much so that Netflix CEO Reed Hastings has explicitly stated that his company's biggest rivals aren't Amazon, YouTube or even traditional broadcasters. Rather, it's "our need for sleep".

Going forward, AI systems will only become more autonomous.

They are so deeply embedded and integrated with the companies' products that it's exceedingly difficult to distinguish them from the companies' core products. For instance, Netflix's value lies in adding more subscribers. But it's the power of algorithms that helps Netflix engage and retain these subscribers. Simultaneously, it's also true that Netflix's success relies on providing good original movies with brilliant actors, and pushy, creative marketing and sales, among other things. In other words, the ROI here lies in realising that AI is a critical driver for such an industry. In these cases, companies need to acknowledge that if they do not invest in AI, they will be caught napping and will eventually be ousted from business.

"The very best startup ideas have three things in common: They're something the founders themselves want, that they themselves can build, and that few others realise are worth doing. Microsoft, Apple, Yahoo, Google, and Facebook all began this way."

India is the third-largest ecosystem of startups in the world, with 24 unicorns valued at a little over $100 billion, according to NASSCOM. Startups valued at over $1 billion are called unicorns. A company valued at more than $10 billion gets the title of a Decacorn. Hectocorn is reserved for a startup valued at over $100 billion.

NASSCOM defines a startup as an "entity working towards innovation, development, deployment, and commercialization of new products, processes, or services driven by technology or intellectual property".

Startup Blink defines a startup as "any business that applies an innovative solution which validates a scalable economical model".

let's understand what an AI startup does. An AI startup falls under the category of deep tech startups, which NASSCOM defines as those tech startups "which create, deploy or use advanced technology in their product or service".

Broadly speaking, AI startups are effectively those companies that use AI tools such as ML, DL, NLP, NLU, computer vision, intelligent automation, and robotics to develop products and services and scale their own businesses.

Large technology companies typically have huge, proprietary data sets that span across many industries. And open-source community efforts are quickly democratising access to the most sophisticated ML algorithms. This makes it impossible for an AI startup to develop a competitive advantage solely around algorithm development. If a company is planning to compete with others using AI and ML, it better have the best data to solve a specific problem, failing which it should adopt a strategy that is different from its competitors.

Let's take the example of Niramai, which stands for Non-Invasive Risk Assessment with Machine Intelligence and also means "free from illness" in Sanskrit. The startup has developed a low-cost, software-based, automated, portable cancer screening tool that can be used in any clinic. The core technology has been developed using patented ML algorithms. Niramai's AI-based, radiation-free breast cancer screening test has received CE mark approval, ISO 13485 and MDSAP (Medical Device Single Audit Programme) international certifications. The CE mark approval indicates that the product may be sold freely in any part of the European Economic Area. The ISO 13485 and MDSAP certifications endorse medical device manufacturers for their compliance with international medical device quality standards and regulatory requirements. The test is already being used in renowned hospitals and clinics across various Indian cities, including HCG Hospital, Apollo Clinics, and HealthSpring Diagnostics.

It has been observed that homogeneous startup teams, especially when composed solely of AI researchers with no industry-specific experience, tend to fail more often.

There is significant business value in providing tools for other companies to develop and use AI technologies, essentially becoming the picks-and-shovels sellers in the AI gold rush.

Algorithmia, for instance, is an AI startup that has built a kind of app store for algorithms.

Speaking at the NASSCOM Technology and Leadership Forum (NTLF) 2021, Indian Prime Minister Narendra Modi said, "I have a message for startup founders. Don't limit yourself to valuation and exit strategies. Think how you can create institutions that will outlast this century. Think how you can create world-class products that will set the global benchmark on excellence. There can be no compromise on these goals. Without these, we will be a follower and not a global leader."

Companies need to collect all this data to train their AI models that will, when deployed, decipher patterns to help these companies predict consumer behaviour. The AI team also helps companies to analyse this data, draw insights, and suggest specific lines of action to grow sales, revenue, etc., an exercise that is better known as data science.

The goal is to turn data into information, and information into insight. - Carly Fiorina

AI can help increase farmer lending using credit risk assessment models based on farm characteristics and output data, which, in turn, can improve farmer income and provide money at lower costs.

Indian Software Product Industry Round Table (iSPIRT), which built the India Stack and UPI.

The above-cited NITI Aayog study insists that AI combined with robotics and the Internet of Medical Things (IoMT) "could potentially be the new nervous system for healthcare, presenting solutions to address healthcare problems and helping the government in meeting the above objectives".

The pandemic, ironically, was one of the primary reasons for fintech firms to leverage access to data, technology, and advanced analytics in 2021, almost making it a watershed moment for digital adoption in India.

Platforms such as Alethea AI or Fetch.ai are trying to incorporate language and speech capabilities to establish a dialogue with users.

The cryptocurrency capitalisation globally now exceeds $2.2 trillion, according to coinmarketcap.com. In comparison, India's GDP as of 2021 was estimated to be $3 trillion.

Cryptocurrencies may eventually make way for a Central Bank Digital Currency (CBDC).

Ironically, while cryptocurrencies remain a thorn in the side of governments, they appear to be comfortable with the underlying technology that powers bitcoin: blockchain.

The ministry has identified 44 key areas where blockchains can be applied, including the transfer of land and property, managing digital certificates, pharmaceutical supply chain, e-notary services, e-voting, smart grid management, and electronic health record management. The document has also taken into consideration blockchain-based platforms operated by governments in China, Brazil, the UAE, and Europe and highlights various government-led initiatives on blockchain that are underway.

15 banks in India have partnered to establish a new company named Indian Banks' Blockchain Infrastructure Co. Pvt. Ltd (IBBIC). The idea is to use blockchain technology to process inland letters of credit (LCs). The banks include ICICI Bank, HDFC Bank, Kotak Mahindra Bank, Axis Bank, and SBI. Meanwhile, the Institute for Development and Research in Banking Technology (IDRBT), the technology and research arm of RBI, is also in the process of developing a model blockchain platform for banking needs.

"We're making this analogy that AI is the new electricity. Electricity transformed industries: agriculture, transportation, communication, manufacturing."

Farmpal is a relatively unknown agritech company based in Pune.

In September 2015, the General Assembly of the UN adopted the 2030 Agenda for Sustainable Development, which includes 17 Sustainable Development Goals (SDGs). Building on the principle of "leaving no one behind", the new agenda emphasises a holistic approach to achieving sustainable development for all.

In India, NITI Aayog has decided to focus on five sectors that are envisioned to benefit the most from AI in solving societal needs.

Agriculture: contributes around 16 per cent of India's GDP and employs close to 49 per cent of the adult workforce, and yet it is in the midst of an existential crisis. While food production has shot up consistently year after year, the net farm income per hectare of land has not shown a proportional increase. The agricultural sector is facing challenges across all facets: production, distribution, and monetisation.

Data collection and social inequities: More often than not, one sees that structured data is better available from larger urban agglomerations, while there is very little structured data that comes from rural and poor India. Any analysis of such data will be inherently skewed towards the more affluent parts of the country, leaving large swathes of the country living with solutions that are far removed from their reality.

For instance, in the aftermath of the second wave of COVID, an analysis was done on the RT-PCR tests conducted on visitors to the Kumbh Mela. Investigations revealed that nearly 1 lakh fake tests were conducted during the event in April 2021. What is inescapable is the fact that such large-scale data manipulation will lead to a completely incorrect conclusion. Any data modelling that is based on such data will, therefore, be incorrect. Consequently, any AI solution that is developed using the model will be, at best, ineffective and, at worst, inimical to public welfare.

There have also been attempts to demonise AI and to project a society that is run by automatons: AI-driven robots that lord over humans. Some research has also emerged on an "AI bias", which is inherently created by the human bias that is fed into the system and the self-sustaining data sets that seek to perpetuate the bias.

Max Tegmark, president of Future of Life Institute, says, "The concerns about AI are not about malevolence but competence." He points out that the best example of AI is the human species itself. While humans may not be the strongest animals on the face of the earth, they control it because they are the smartest. So, will it be possible someday to create an AI solution that could be smarter than the human who created the basic algorithms that eventually created the AI solution?

"If you talk to a man in a language he understands, that goes to his head."

By the time the internet has a billion users, it is estimated that about 750 million will be local language users.

Further, Google's Zero-Shot Machine Translation system has been trained on translation across 100 different languages. Here's how the system works. To begin with, Google trained its multilingual system to share its parameters when translating between four different language pairs: Japanese to English and English to Japanese; Korean to English and English to Korean. The success of this method inspired them to explore if they could translate between a language pair that the system had never seen before. An example of this would be translations between Korean and Japanese, where the Korean-to-Japanese and Japanese-to-Korean examples were not shown to the system. Google realised that its system could generate "reasonable" Korean to Japanese and Japanese to Korean translations, "even though it has never been taught to do so". They called this "zero-shot" translation.

In October 2018, Reverie launched "Gopal", an interactive voice-based NLP engine in seven Indian languages (a Siri for Indian languages).

"Never doubt that a small group of thoughtful, committed people can change the world. Indeed, it is the only thing that ever has."

Patient capital is another name for long-term capital. With patient capital, the investor is willing to make a financial investment in a business with no expectation of turning a quick profit. Instead, the investor is willing to forgo an immediate return in anticipation of more substantial returns down the road.

To close the gap and leapfrog other developed countries and become "the world's primary AI innovation center" by 2030, China recognised the need to address its entire AI ecosystem. China developed a comprehensive list of tasks that would enable them to achieve such a lofty goal. These included focusing on increasing the supply of AI innovation sources, forcefully (yes, you read this right) developing smart enterprises, promoting the use of AI for social governance and enhancing public safety and security capabilities, and strengthening the new generation of AI with the convergence of major scientific and technological projects, technological breakthroughs, and product development applications.

"Learning never exhausts the mind."

Consider these examples. TCS's Digitate launched a product called Ignio, the "world's first cognitive system for enterprise IT". Ignio aims to rapidly identify root causes and automate routine tasks. Infosys launched Mana to automate repetitive and commoditised software maintenance tasks. Building on this, Infosys later launched Nia, which can tackle more complex problems around revenue forecasting, product recommendations, and customer behaviour understanding, among others. Likewise, Wipro launched its in-house AI platform HOLMES, aimed at helping digital transformation through algorithmic intelligence and cognitive computing capabilities.

"If WeChat was a person, it would be your best friend based on the amount of time you spend on it. So, how could we put an advertisement on the face of your best friend? Every time you see them, you would have to watch an advertisement before you could talk to them."

On December 20, 1990, Sir Tim Berners-Lee gave birth to the world's first website at a laboratory in the European Organization for Nuclear Research, better known as CERN. It was a simple page that explained how hypertext markup language, or HTML, worked. That page changed our world.

Five years later, India got its taste of the first publicly available internet service when state-owned Videsh Sanchar Nigam Limited (VSNL) launched the service on August 15, 1995. Those of us who were adults then will recall that we had to access the internet using modems that emitted guttural sounds as they painfully tried to connect us to cyberspace.

Web portals typically had a search engine and a payment gateway for their numerous services and products. Super apps, however, are a much more advanced version of web portals since they have two superpowers: the first is the power of data analytics, and the second is AI.

A super app is the smartphone mobile app version of the all-in-one, or most-in-one, platform that caters to an average user's daily needs. In the offline world, think of a large mall, like the Mall of America or the Ambience Mall in Gurugram.

A good mall is designed not only to cater to the needs of its visitors but also to ensure that they spend a maximum amount of time (and money) once they enter. Over time, the mall ends up becoming the top-of-mind choice for consumers. This philosophy from the offline, brick-and-mortar retail world has been adopted in the online world by consumer internet companies, first in the desktop internet (Web 1.0)

Sometime leading up to October 2016, when mobile internet data usage around the world was surpassing desktop internet usage, companies around the world, especially technology companies, needed to come up with a "mobile strategy". As smartphones and mobile data led the mobile internet era evolution, the principle of ensuring that most of the internet user's time is spent on their platform was recognised by consumer internet and technology companies. This need, this trend of one app being the single platform to cater to most of the user's requirements or a platform of apps that do, is the current evolution of the super app.

Salt-to-software conglomerate Tata Group, too, is reportedly planning to launch a super app through Tata Digital to expand its presence in consumer-facing businesses. Tata Sons plans to invest at least $2 billion in its super app, christened TataNeu, and later raise an additional $5 billion from external investors by selling minority stakes in the digital venture, Mint reported on October 25, 2021, citing unnamed sources. The app, when eventually launched, is expected to aggregate all Tata Group services (grocery, lifestyle, electronics, healthcare, finance, etc.) under a single "omnichannel" platform.

The Mahindra Group, however, is taking a different approach. It does not appear to be enthused with the idea of giving users access to multiple services of an organisation from within a single gigantic app. Instead, the Group, run by Indian billionaire Anand Mahindra, is mulling bringing together services such as farm, agriculture, finance, auto, and even used cars and tractors under a single digital roof and may christen it the "Farmer App".

Chinese company Tencent's mobile instant messaging app, WeChat, evolved into a platform of apps, thus becoming the world's first super app. China is a mobile-first internet country. In August 2018, when the country's internet user base crossed the 800 million active users mark, 788 million were mobile users.¹

It's hardly surprising, then, that the first time I heard of anything close to being a super app was in August 2015 from Connie Chan's blog post on WeChat. Connie Chan was a China-focused analyst at a16z (Andreessen Horowitz), a leading Silicon Valley technology venture capital firm founded by Marc Andreessen and Ben Horowitz. In fact, that was the first time most people around the world got introduced to the concept of a super app, along with the distance WeChat had covered to evolve into one.

Known in Chinese as Weixin, or "micro letter", WeChat is first and foremost a messaging app for sending text, voice, and photos to friends and family. Along with its basic communication features, WeChat users in China can access services to hail a taxi, order food delivery, buy movie tickets, play casual games, check in for a flight, send money to friends, access fitness tracker data, book a doctor's appointment, get banking statements, pay the water bill, find geotargeted coupons, recognise music, search for a book at the local library, meet strangers around them, follow celebrity news, read magazine articles, and even donate to charity... all in a single, integrated app.

WeChat was developed in three months as a small-scale, experimental project on Tencent's campus with seven engineers under the aegis of founder Zhang Xiaolong. The first version of the world's largest app, with simple messaging and photo-sharing features, was launched in early 2011. The voice message function was added three months later.

WeChat started to cash in on the herd mentality.

Philosophically, while Facebook and WhatsApp measure growth by the number of daily and monthly active users on their networks, WeChat cares more about how relevant and central it is in addressing the daily, even hourly, needs of its users. Instead of focusing on building the largest social network in the world, WeChat has focused on building a mobile lifestyle: its goal is to address every aspect of its users' lives, including non-social ones.

The way it achieves this goal is through one of the most unsurfaced aspects of WeChat: the pioneering model of "apps within an app".

Allen Zhang, also known as the "Father of WeChat", talks about treating users with genuine empathy to ensure the stickiness of products.

We all belong to the same human species, and yet we differ in many aspects, such as height, weight, attitude, energy levels, wealth, and prosperity, to name a few. But we all have one thing in common, regardless of where we live: the number of hours in a day is the same.

The issue at hand is that every large mobile app and consumer internet company aspires to develop a super app. The way they typically approach the problem is by adding various features and functionalities to their core product and expecting, or rather hoping, that it will become a super app. But what most companies seem to forget is that a super app is not just a product. It's an ecosystem.

Data has been called many things-the future, the new currency, the new oil. Regardless of the name you give it, the time is ripe for companies to have their "data strategy" or "data thinking" in place. Every company, going forward, is going to be a data and AI company. It's in this context that a super app will help them understand consumer behaviour across all their online and offline properties.

In 30 years, a robot will likely be on the cover of Time magazine as the best CEO. Machines will do what human beings are incapable of doing.

trends we are currently seeing in the workplace. I would like to list six such trends. These are: "Work from home is here to stay"; "Gender diversity and inclusion will increase"; "There will be more wage parity"; "We will have more work-life balance"; "We will be using more digital tools and virtual settings, all of which will reduce the need for physical travel"; and "Automation and smart machines will make us redundant if we do not reskill."

This simply means that automation and robots are here to stay, competing with humans for some jobs while collaborating with them on others.

Machines, for instance, have been collaborating with humans for thousands of years, and more so after the Industrial Revolution, which introduced the assembly line production concept in factories.

The first modern programmable robot was the Unimate, an autonomous, pre-programmed robot that repeatedly performed a dangerous task: moving pieces of hot metal. General Motors was the first company to install Unimate in one of its factories in 1961.³

And it was more than six decades ago that the US Navy secretly toyed with the idea of fully automating the making of electronic parts and subassemblies. Concerned that electronics could not be manufactured fast enough if a major war were to occur after World War II, it launched a project christened "Tinkertoy" in a compact little factory on the outskirts of Washington. The US Navy partnered with the National Bureau of Standards to develop an almost automatic assembly line for many electronic parts. But the project was shelved and is now a museum artefact.

Many factories all over the world and in India have been using computer numerical control (CNC) machines for years.

We may soon have hundreds, even thousands, of "smart" factories that are completely run by robots, dispensing with the need for human workers. These are known as "lights-out" factories since robots do not need lights to work. The trend began more than two decades ago when Japanese robotics firm FANUC, considered the poster boy for such factories, inaugurated a lights-out factory in 2001.

It's only natural that, given the potential of these smart robots, most of us will perceive them as enemies who are here to take away our jobs. The fear of AI is so real that even the head of the Roman Catholic Church has asked for God's help. In November 2020, Pope Francis invited his flock to pray that "the progress of robotics and AI may always serve humankind".

If you believe in God, you may think that prayer will help you make robots toe the human line. But if you are an agnostic or atheist, you will acknowledge that it's us humans who have created these smart robots.

In the Colonial view, as the name suggests, we perceive smart robots as opponents that will become intelligent enough to surpass us at most tasks. They eventually could become our bosses and even enslave us.

The Collaborator view takes a lenient view of robots, seeing them as friends who are here to help us deal with mundane tasks and leave us with a lot more time for leisure.

That said, it's also important to examine why smart robots threaten us. I can list 10 such reasons. To begin with, a robot does not sleep or get tired-it can work all day and night and does not need to take sick leave, maternity, or paternity leave. Second, even when you take a robot off duty for maintenance, it can be instantly replaced by an equally able one. Third, a robot does not have to retire-the worn-out parts can simply be replaced and the software upgraded.

Algorithmically-driven agents are already participating in our economy. However, while these agents are automated, they are not fully autonomous. New autonomous software agents will function as the fundamental underpinning of a new economic paradigm that Gartner calls the "programmable economy" or "algorithmic economy".

Amazon's recommendation algorithm, for instance, keeps customers continuously engaged with its marketplace. Netflix's dynamic algorithm keeps people busy with binge-watching. Google-owned Waze's algorithm directs thousands of independent cars on the road.

AI and ML can test numerous demand forecasting models with precision while automatically adjusting to different variables, such as new product introductions, supply chain disruptions, or sudden changes in demand. Using AI systems, every single part of a product can be tracked from when it is first manufactured to when it is assembled and shipped to the customer.

Walmart, for instance, has cut the time taken for a physical inventory check from one month to 24 hours using sophisticated drones that fly through the warehouse, scan products, and check for misplaced items. Using algorithms that learn from experience to optimise logistics, BMW tracks a part, from the point it is manufactured to when the vehicle is sold, across all its 31 assembly facilities located in over 15 countries.

A case in point is that of Indian IT services provider Tech Mahindra, which introduced an HR humanoid (a robot that resembles a human) at its Noida Special Economic Zone Campus in Uttar Pradesh. Christened K2, this was the second HR humanoid from the Mahindra Group company-the first was launched at its Hyderabad campus in 2019.

Among other things, the Brookings report concludes that AI may end up creating a lot of ancillary jobs. "Just as the automobile created jobs not only in auto manufacturing plants but also in pumping stations, roadside restaurants, and the new suburban America that emerged, it seems likely that AI will have similarly far-reaching, if difficult to predict, indirect effects".

WEF has a similar outlook. According to WEF's "Jobs of Tomorrow:

"When intelligent machines are constructed, we should not be surprised to find them as confused and as stubborn as men in their convictions about mind-matter, consciousness, free will, and the like."

Microsoft's AI chatbot Tay began tweeting "wildly inappropriate and reprehensible words and images" as soon as it was launched on March 23, 2016. As a result, there was a public outcry online, forcing Microsoft to get rid of the bot in just 16 hours.

in June 2017, researchers at Facebook Artificial Intelligence Research (FAIR) developed two AI chatbots. The aim was to have the bots chat with humans. Instead, the bots began talking with each other in a language that their own human creators did not understand.

Neurons are arranged in layers. The neurons in the first layer perform a calculation and send the output to the neurons in the next layer. The process is repeated until the final output.

A node assigns a number known as a "weight" to each of its incoming connections. When the network is active, the node multiplies each incoming number by the weight of its connection and sums the results. If that sum is below a threshold value, the node passes no data to the next layer. If the sum passes the test, the node "fires", sending the number along its outgoing connections. The weights and thresholds, thus, are continually adjusted until training data with the same labels consistently yield similar outputs. There is also a process known as backpropagation that tweaks the calculations of individual neurons to allow the network to learn to produce the desired output.
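The threshold-and-fire behaviour described above can be sketched in a few lines of Python; the weights, thresholds, and two-layer shape here are purely illustrative, not taken from any real network:

```python
# Sketch of the forward pass: each node weighs its inputs, sums them, and
# "fires" (passes the sum on) only if the total clears its threshold;
# otherwise it passes nothing (0.0) to the next layer.

def node_output(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total >= threshold else 0.0  # fire only above threshold

def layer_output(inputs, layer):
    # A layer is a list of (weights, threshold) pairs, one per node.
    return [node_output(inputs, w, t) for w, t in layer]

# Two layers: the first layer's outputs become the second layer's inputs,
# repeated until the final output.
layer1 = [([0.5, 0.8], 0.6), ([0.9, 0.2], 0.4)]
layer2 = [([1.0, 1.0], 0.5)]

hidden = layer_output([1.0, 0.0], layer1)
final = layer_output(hidden, layer2)
```

Backpropagation, which the sketch omits, would then nudge these weights and thresholds after each training example to reduce the error in the final output.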

When the first trainable neural network, the perceptron, was demonstrated by Cornell University psychologist Frank Rosenblatt in 1957, it had only one layer with adjustable weights and thresholds between the input and output layers.
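A minimal sketch of such a single-layer perceptron, trained with Rosenblatt's error-correction rule; the task (learning logical AND), data, and learning rate are assumptions chosen for illustration:

```python
# Sketch of Rosenblatt's single-layer perceptron: one layer of adjustable
# weights and a threshold (expressed here as a bias) between input and
# output, updated by the classic error-correction rule.

def train_perceptron(samples, epochs=20, lr=1):
    w, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = label - pred          # +1, 0, or -1
            w[0] += lr * err * x1       # nudge weights towards the answer
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

def predict(w, bias, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0

# Logical AND is linearly separable, so the perceptron can learn it.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

Minsky and Papert's 1969 critique, mentioned earlier, showed that a single layer like this cannot learn tasks that are not linearly separable, such as XOR.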

Unsupervised learning is used when researchers ask algorithms questions to which they do not know the answers. A DL model is given a training data set but no explicit instructions or labels. In other words, the model is expected to automatically analyse the data by extracting patterns to eventually present a result that the researchers would not have known of.

For instance, banks use unsupervised learning to detect fraudulent transactions by looking for unusual patterns in a customer's purchasing behaviour. The process is also known as "anomaly detection". In such cases, the unsupervised DL model does the trick by flagging outliers in a data set.
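As a toy illustration of the idea (not any bank's actual system), one can flag outliers in a list of transaction amounts with a simple statistical test; the figures below are invented:

```python
# Toy anomaly detection: flag amounts that deviate sharply from a
# customer's usual spending, via a simple z-score on the history.
from statistics import mean, stdev

def flag_outliers(amounts, z_cut=2.0):
    mu, sigma = mean(amounts), stdev(amounts)
    # An amount is an outlier if it lies more than z_cut standard
    # deviations from the mean of the customer's history.
    return [a for a in amounts if abs(a - mu) > z_cut * sigma]

history = [120, 95, 110, 130, 105, 98, 115, 5000]  # one suspicious spend
suspicious = flag_outliers(history)
```

Real systems learn far richer notions of "usual" behaviour, but the principle is the same: no labels, only deviation from learned patterns.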

When "clustering", an unsupervised model would look at training data that are similar to each other and group them together. An unsupervised model can also make decisions using "association". For instance, if you're shopping for tops, the model may suggest shorts, trousers, shoes, socks, and accessories such as belts too.
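Clustering can be sketched with a tiny k-means loop that groups similar values together without any labels; the data and starting centres below are illustrative:

```python
# Tiny 1-D k-means: with no labels, points that are similar end up grouped
# together around a shared centre.

def kmeans_1d(points, centres, iters=10):
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre's cluster.
        clusters = [[] for _ in centres]
        for p in points:
            nearest = min(range(len(centres)), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Update step: each centre moves to the mean of its cluster.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

# Two natural groups of spending amounts, never labelled as such.
points = [1.0, 1.2, 0.8, 9.5, 10.0, 10.5]
centres, clusters = kmeans_1d(points, centres=[0.0, 5.0])
```

After a few iterations the centres settle near the two natural groups, which the model discovered on its own.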

Semi-supervised learning, as its name suggests, uses a training data set with both labelled and unlabelled data.

supervised and semi-supervised DL models have humans in the loop. But that is not the case with unsupervised neural nets, which use an opaque process to produce results. This implies that even their human designers rarely have any idea of how the algorithm generates its results.

As we see from the above cases, algorithms can jeopardise people's careers with unfair grades and hurt people's sentiments. These instances recur because algorithms are written by humans and thus reflect our biases in AI-generated results, unless we are cognisant of this and take appropriate action to correct these biases.

It's erroneous to believe that just because an algorithm is math, it is unbiased. This realisation is critical since the decisions that biased algorithms make can have a direct impact on our lives.

Who will be held responsible in case of an accident by a driverless car-the car owner, the company that developed the car and its algorithms, or the algorithm that has taken the decision but cannot explain why?

An app called DeepNude used GANs to enable users, for $50, to compromise the modesty of clothed women by manipulating their online photographs, but it was forced to shut down following a report by Vice and subsequent protests from the public. Such images can easily be used as fake revenge porn to damage a woman's reputation.

a site like Thispersondoesnotexist.com shows how GANs can be painters but can also create fictitious people. A GAN-powered deepfake can easily alter our very perception of reality.

In 2016, MIT presented the Nightmare Machine: AI-generated scary imagery, an AI project that aimed not only at detecting but also at inducing extreme emotions such as fear in humans. Users can visit the site and help the algorithm learn by voting. To date, the site has received over two million votes. You can watch the algorithm alter a pleasant image to a scary one in real-time.

A year later, MIT researchers presented Shelley, which they described as the world's first collaborative AI horror writer. Shelley is a DL AI that was trained on eerie stories collected from r/nosleep. And the following year, MIT trained the world's first AI-powered psychopath, called "Norman", on Reddit data and compared its captions with those of standard image-captioning neural networks.

All these examples underscore the need for algorithms to be imbued with a sense of ethics and fairness so that they can avoid biases and explain why they took a particular decision. In other words, AI cannot be opaque or a "black box".

Project Maven, for instance, is a Pentagon programme to build an AI-powered surveillance platform for unmanned aerial vehicles or UAVs. It is also called the Algorithmic Warfare Cross-Function Team, or AWCFT. Google initially aided the project with its AI expertise, but in 2018, thousands of Google employees wrote an open letter to the management, exhorting it to abandon the project, which Google did. But that's not the last we will hear of Project Maven.

we have the Centre for Artificial Intelligence and Robotics (CAIR), which does research and development in the areas of AI, robotics, command and control, networking, and information and communication security, with a focus on developing mission-critical products for battlefield communication and management systems.

Simply put, why do we expect the answers provided by AI-powered algorithms to be right? Why should we not ask about the process they adopted to arrive at these answers or decisions? Or do we suffer from what is sometimes known as an "automation bias", allowing us to shift the responsibility and accountability for such decisions onto computers?

It's hardly surprising that technology luminaries such as Bill Gates, Elon Musk, and even physicist Stephen Hawking have cautioned that robots with AI could rule mankind. Raymond "Ray" Kurzweil, an American author, computer scientist, inventor, and futurist, in his 2005 book The Singularity Is Near, predicted, among many other things, that AI will surpass humans, the smartest and most capable life forms on the planet. He forecasted that machines would attain equal legal status with humans by 2099.

But there are those who believe that AI machines can be controlled.

Even Kurzweil has sought to allay such fears that smart machines will dominate humans, by pointing out that we can deploy strategies to keep emerging technologies like AI safe, and by underscoring the existence of ethical guidelines like Isaac Asimov's Three Laws of Robotics, which can prevent, "at least to some extent", smart machines from overpowering us.

A February 2019 article in MIT Technology Review argued that it is very hard to fix bias in algorithms for three reasons. The first is that bias can seep in when the problem itself is being framed. The second is that the data collected may itself reflect existing biases. Finally, one could exclude certain groups of people while preparing the data, thus reinforcing the bias.⁷

mathematicians and statisticians from the University of Warwick, Imperial, EPFL, and Sciteb Ltd have joined hands to assist businesses and regulators by creating a new "Unethical Optimisation Principle".

They have laid out the full details in a paper titled "An unethical optimisation principle". According to one of the authors, Professor Robert MacKay of the Mathematics Institute of the University of Warwick, the principle "suggests that it may be necessary to re-think the way AI operates in very large strategy spaces, so that unethical outcomes are explicitly rejected in the optimisation/learning process".⁹

The USC team realised that social media hate speech detection algorithms ironically amplify racial bias by blocking inoffensive tweets by black people or other minority group members. This, they reasoned, was because hate speech classifiers are oversensitive to group identifiers like "black", "gay", or "transgender", which are only indicators of hate speech when used in a specific setting. Hence, providing the algorithm with a context becomes critical.

Companies and governments are now gravitating towards a concept called "Explainable AI" (XAI), also referred to as transparent AI, which has the backing of institutions like the US-based Defense Advanced Research Projects Agency (DARPA).

As Yann LeCun, VP and chief AI scientist, Facebook, points out, "Most of human and animal learning is unsupervised learning. If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake. We know how to make the icing and the cherry, but we don't know how to make the cake."

AI has the potential to deliver an additional $939 billion in value across the public sectors of 16 major developed economies by 2035, according to a 2018 report by Accenture.

The Partnership on AI, for instance, is an excellent effort to make AI socially responsible. It was established in late 2016, led by a group of AI researchers representing six technology companies: Apple, Amazon, DeepMind and Google, Facebook, IBM, and Microsoft. The alliance now comprises a community of over 50 member organisations and nearly 100 partners.

We humans have lived with our biases for thousands of years and will continue to do so in the future.

Technology may be propelling us into a new century "with no plan, no control, no brakes", and it may now very well be the time for reprising control before we cross the fail-safe point.

May 2016-the sleek and gorgeous Tesla Model S is cruising along on autopilot. It's a bright sunny day, the perfect day to be out for a long drive or so one would think. But the bright Florida sun finds a flaw in the autopilot system, which fails to register the white truck turning into the car's path. The car crashes into the truck, and Joshua Brown, the driver, loses his life. The bright sunny morning is not quite as sunny any more.

Fast forward to 2017, and, in this instance, it is a Tesla Model X that the driver puts on autopilot before deciding to play a video game on his mobile. The driver does not notice the car veering or the looming threat of the concrete barrier that it eventually crashes into, and another life is lost.⁴

Technology is indeed careening us towards a precipice, literally and figuratively, and it is indeed time to take control. It is no surprise that Tesla's Elon Musk himself advocates regulation for AI "just to make sure that we don't do something very foolish".⁵

Every online platform, app, e-commerce store, and digital medium today tracks our browsing history, viewing or reading patterns, and shopping history.

Combine the power of surveillance through CCTV footage with facial recognition technology (FRT). Telangana has used the TSCOP app since 2018. The app, which collects fingerprint and facial data, is reportedly being misused against citizens without a warrant or just cause.¹⁵ Mass surveillance is not a new fad in Telangana, nor is it the first state to indulge in it. In fact, the Telangana model of mass surveillance is reportedly on the lines of New York and Chicago.¹⁶

From the Indian perspective, NITI Aayog was first off the block with its national policy for AI in the form of a discussion paper titled "National Strategy for Artificial Intelligence" (NSAI 2018).²⁰ The paper proposes #AIforAll, or the democratisation of AI through the permeation of its benefits into sectors such as healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation. "Social and inclusive growth" is the catchphrase used in this document, which focuses predominantly on what it refers to as the "trillions of dollars in opportunity for the IT industry".

Several state governments have actively encouraged research on and adaptation of AI-based solutions. Tamil Nadu leads, having not only worked on such adaptation but also released in 2020 its AI policy paper titled "Safe & Ethical Artificial Intelligence Policy 2020",²² the first of its kind from a state government in India. What stands out in the Tamil Nadu AI policy document is its focus on ensuring that AI adaptation "is aligned to democratic values".

In January 2020, NITI Aayog published an approach paper titled "AI Research, Analytics and Knowledge Assimilation platform", abbreviated to AIRAWAT (AIRAWAT 2020),²⁵ which sets out the road map for India's vision of an AI cloud, one of the recommendations in NSAI 2018.²⁶ The key design philosophy for AIRAWAT, the 2020 paper notes, "shall be guided by the need to democratise access to AI computing infrastructure". This initiative appears to be timed along with the various calls for the localisation of data and the encouragement of indigenous innovations and startups, among other initiatives that, as mentioned above, appear to be focused on the public good.

Personal data protection enactments appear to be the preferred route adopted not just by India but also by other jurisdictions that wish to ensure the privacy and security of the data collected, with the European Union's General Data Protection Regulation (GDPR) being treated as the gold standard for personal data protection legal frameworks. Privacy protection in India is sought to be addressed through a proposed special legislation, the Personal Data Protection Bill, 2019 (PDP Bill 2019). The Central Government has also published a research paper on non-personal data, which will have a huge impact on businesses.

Personal data protection awaits a fillip through special laws, and the PDP Bill 2019 is expected to fill this lacuna. With the Supreme Court affirming that privacy is a fundamental right in its unanimous nine-judge decision in Justice K.S. Puttaswamy v. Union of India, it appeared that this special law for personal data protection would be expeditiously enacted and implemented. That has not been the case at the time of writing this book. The PDP Bill 2019 is being reviewed by a joint parliamentary committee and is nearing the last leg of that review, so the bill may see the light of day by the time this book is published. While the present draft leaves a lot to be desired, especially with the expansive exemptions built into it, the hope is for a final enactment that retains the focus on protecting the personal data of India's individuals.

There is a general misconception that India does not presently have legal provisions for personal data protection. This is patently incorrect, as Sections 43A and 72A of the Information Technology Act, 2000 (as amended) ('IT Act') provide civil and criminal penalties, respectively, for the negligent handling of personal and sensitive personal data. These provisions are further buttressed by the rules framed under Section 43A of the IT Act, namely the 'Information Technology (Reasonable Security Practices And Procedures And Sensitive Personal Data Or Information) Rules, 2011' ('SPD Rules').

Predictive policing is no longer fictional, as in Minority Report, but an actual AI use case. The use of predictive algorithms is increasingly relied on by both the police and the judiciary, with the former using it to predict repeat offenders and the latter using similar parameters to evaluate the sentence to be imposed.

"See, the world is full of things more powerful than us. But if you know how to catch a ride, you can go places."

Chennai-based Dinesh Kshatriyan decided to adopt a virtual experience for his wedding reception in early January 2022, given the restrictions state governments had imposed to stem the spread of a COVID-19 variant. While the actual wedding ceremony was an intimate real-world affair at his fiancée's village, the reception was held in a virtual representation of the Hogwarts School of Witchcraft and Wizardry from the Harry Potter universe.

Abhijeet Goel and Sansrati, who tied the knot a month later, went a step further by getting married in a 3D metaverse. The wedding, which took place on Yug Metaverse, was conceptualised, organised, and executed by the media agency Wavemaker India for ITC Ltd. and Matrimony.com.

While these are simply cases in point, it begs the question: What exactly is a metaverse? While there's no one definition, the metaverse is broadly a combination of both the physical and digital worlds where people can interact virtually with the help of VR headsets and AR.

Gartner Inc. defines a metaverse as a collective virtual shared space created by the convergence of virtually enhanced physical and digital reality. It is persistent, providing enhanced immersive experiences, as well as device-independent, and accessible through any type of device, from tablets to head-mounted displays.

Gartner predicts that by 2026, 25 per cent of people will spend at least one hour a day in the metaverse for work, shopping, education, social, and/or entertainment.

The hype around the metaverse began in earnest when Mark Zuckerberg made his Meta announcement on October 28, 2021, and insisted on wanting the new identity to be "metaverse-first, not Facebook-first" (remember, Google underscores AI-first, and Meta cannot work without AI). But it must be noted that the metaverse is not unique to Facebook. Many technology firms, including Microsoft, Nvidia, and Fortnite maker Epic Games, have been talking about their own visions of the metaverse for quite some time.

the term itself has been borrowed from Neal Stephenson's 1992 sci-fi novel Snow Crash, where the concept was used to describe a new kind of internet with VR.

Recall Tom Cruise encountering interactive billboards and iris-triggered direct marketing in Minority Report, released nearly 19 years ago, or Tony Stark, the Marvel Comics superhero in Iron Man, going a step further with his AI partner Jarvis ("Just A Rather Very Intelligent System") providing him all the information he needs on holograms, computers, and even in Stark's Iron Man suits.

Pokémon Go, a free location-based AR mobile game for iOS and Android smartphone users, mixes online reality with the real world. It allows players to use GPS and Google Maps on their smartphones to look for PokéStops at places such as public art installations, historical markers, and monuments, where they can collect Poké Balls and other items. And like PokéStops, gyms can be found at real locations in the world, all this without users needing a VR headset, which indicates that AR technology is coming of age.

You may recall Second Life, which can be said to be a different kind of metaverse within, or beyond, the internet. Developed by San Francisco-based Linden Lab in 2003, this multiplayer world became a digital craze when it allowed users to create their digital 3D avatars, socialise with others, play games, and explore multiple worlds called Sims.

Second Life's user base reached a record high of 1.1 million monthly active users in 2013 and is currently believed to have around 9,00,000 active users.

While VR is all about a world created solely on computers or online, AR still deals with the real world and has elements of the virtual world built atop it, akin to layers of information. AR technology was envisioned by Ivan Sutherland, who devised the first AR system in 1968, but the technology is blooming only now with customised applications in industrial automation, theme parks, sports television, military displays, and online marketing. Jaron Lanier, an American writer, computer scientist, and composer of classical music, is credited with popularising the term 'VR'. He and Thomas G. Zimmerman left gaming firm Atari in 1985 to launch VPL Research Inc., the first company to sell VR goggles and gloves. Mixed reality or MR, as the name suggests, mixes both realities in a bid to capture the best of both worlds. It's important to understand that companies are building their own metaverses using these technologies that have existed for more than three decades.

The popular and established ones include the likes of Decentraland, The Sandbox, Roblox, Epic Games' Fortnite, and even Facebook's own metaverse called Horizon Worlds.

In early 2022, for instance, Punjabi singer Daler Mehndi announced India's first metaverse concert that was held on Republic Day through a customised platform called Partynite. Users were invited to create their avatars and attend the concert. They had to walk around and find NFTs before an allotted time set by a timer. When the concert ended, a popup prompted the users to save all the collected NFTs and connect them to their ApnaDAO wallet.

the Madras Maharani Concert was held in a metaverse by NFT marketplace Jupiter Meta in association with radio partners Hello FM on April 15, 2022. With a showcase of spectacular visuals, immersive digital aesthetics, and soulful singing by singer and composer Karthik, every member of the audience was given exclusive music NFTs that can be traded. The concert and Jupiter Meta's initiative to launch the music NFTs saw fans of the performer throng the metaverse to hear old classics and new compositions in this unique setting, with their avatars taking in the special experience.

In September 2021, Facebook (now Meta) introduced Ray-Ban Stories, smart glasses that can capture photos and videos, and help you listen to music or take phone calls. Built in partnership with EssilorLuxottica, Ray-Ban Stories are already available in a few countries (but not in India). A month later, Facebook announced a $10 million Creator Fund to "encourage more people to come build with us as we continue rolling out Horizon in beta". Facebook Horizon is a place to explore, play, and create with others in VR. Further, lifelike Codec Avatars from Facebook Reality Labs is another ongoing research project.

Mumbai-based VR startup Tesseract, in which Mukesh Ambani's Reliance Jio has a majority stake, promises a similar mixed-reality future with its Jio Glass, Quark camera, Holoboard headset, and JioFiber.

The larger concerns, however, are around sexual abuse, violation of privacy, and misuse of data in the metaverse, which need more attention from policymakers. For instance, in May 2022, a SumOfUs researcher along with her colleagues entered the metaverse with the aim of studying the behaviour of users on Meta's social networking platform Horizon Worlds. But within an hour of donning her Oculus VR headset, she says, her avatar was raped in the virtual space.³

Products and services that spring out of the metaverses will thrive on data collection and AI-powered data analytics, which can lead to gross misuse if there are no checks and balances.

In a November 9 interview with the Associated Press, Facebook whistleblower Frances Haugen opined that the metaverse "will be addictive and rob people of yet more personal information while giving the embattled company another monopoly online". Haugen worked at Facebook for nearly two years after stints at Google, Yelp, and Pinterest.

Some companies have already taken action over NFTs that breach intellectual property and trademark law. Luxury fashion brand Hermès, for instance, sent a cease-and-desist letter to artist Mason Rothschild, who sold NFT artwork inspired by the Birkin bag. The NFTs "infringed upon the intellectual property and trademark rights of Hermès and are an example of fake Hermès products in the metaverse," according to the company.

For one, advances in quantum computing can radically alter the speed at which data is mined and interpreted. Second, major tweaks to AI algorithms themselves are dramatically reducing the amount of data needed to train AI models, and we will certainly see more progress in the coming years.

As Geoffrey Hinton, often referred to as the 'Godfather' of AI, himself put it: "The future (of AI) depends on some graduate student who is deeply suspicious of everything I have said."
