What Is An AI-First Product?

Examining how AI changes software development and distribution.

A few days ago, Duolingo CEO Luis von Ahn wrote a letter to his employees stating that he’s refocusing the company to be AI-first, calling it a “platform shift.” In the memo, he highlights betting on the mobile revolution in 2012 and how that era enabled Duolingo to thrive. He also instructed employees to use AI in their work, but I will focus on the “platform shift” in this essay.

You can read Luis’ complete statement below:

What’s interesting about this statement is that in Why Is Duolingo So Crazy, I noted that the education industry, and particularly the language-learning sub-sector, benefits from large language models and technologies like OpenAI’s Generative Pre-trained Transformer (GPT). Here’s an excerpt:

“Education is one industry where AI drives productivity and new product features. Duolingo has long relied on AI tools to build content for its users, and with the launch of OpenAI’s generative pre-trained transformer (GPT) model, it’s begun to do more.

Several Duolingo users have long expressed frustration about their inability to speak languages beyond the app’s content. Among actual language learners, this is a recognised issue with Duolingo that the company aims to rectify. In March 2023, it introduced new AI features in a subscription tier known as Duolingo Max. Built with OpenAI's GPT-4, Duolingo Max subscribers can engage in real-life conversations within the app while receiving real-time feedback on their language progress. By September 2024, it added another layer - video calls with Lily, one of the app's main characters.”

Luis’ letter highlights two significant things: a prediction of how AI could change work (along with a mandate to use AI) and a demonstration of how AI is changing software and software companies, particularly Duolingo. The announcements didn’t stop at his letter - the company announced 148 new language courses, expanding its seven most popular non-English courses to 28 user interface languages.

This is a big deal. If you speak one of the world’s most popular languages, you probably take that for granted, but many people do not speak any of them. If Yoruba is my first language, how can I learn Spanish? Most Spanish courses are not tailored for Yoruba speakers.

AI enables Duolingo to adapt its courses to as many languages as possible. As long as it (or anyone else) can build a model that handles a language, courses can be created for speakers of that language.

There are 530 spoken languages in my home country, and thousands of languages worldwide (source: SIL International).

The productivity gains are also worth noting. The language announcement says:

“Developing our first 100 courses took about 12 years, and now, in about a year, we’re able to create and launch nearly 150 new courses. This is a great example of how generative AI can directly benefit our learners.”

Luis von Ahn, Duolingo CEO.
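For perspective, 100 courses over about 12 years works out to roughly eight per year, while the new batch of nearly 150 arrived in about one year - an increase of well over an order of magnitude in output.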

I snicker at tech hype as much as anyone else, but this is genuinely crazy stuff!

In today’s growth case study, I’m a pilot in a time machine, studying how different eras of computing hardware and software improvement have shaped the type of software that gets built. These eras have influenced not just how software is developed, but also how it is distributed. This isn’t a rigorous historical timeline and does not cover everything that has happened. Still, I’d love to highlight how different computing improvements have affected software, hardware, and users’ lives.

This is not supposed to be a history of computers, but I’ll lean into my nerdiness and start with Charles Babbage’s Analytical Engine in the British Science Museum.

This is a computer!

Let’s take a break before we get deep into the case study. It takes me hours of thinking, researching, writing, editing, and creating to get these case studies published. If you like them and want to support this publication, please buy me a coffee. It would be even better if you made it monthly so I could drag myself to the library with a whole jug of coffee and churn out more case studies like this.

Computing Eras since the 50s.

  1. Personal computing era - Intel and Xerox PARC started a revolution with Intel’s work on silicon chips and Xerox PARC’s work on input devices and graphical user interfaces.


    Microsoft, IBM and Apple ran with it. Personal computing changed how people interacted with computers, from punch cards to keyboards, mice and ubiquitous graphical user interfaces in one generation.


    This meant new types of software built around keyboards and mice as input devices, and new kinds of media. Finally, you could play video, edit work documents or listen to audio if you connected a speaker to the computer. Before personal computing, people used really large mainframe computers like the PDP-10. Software in this pre-Internet personal computing era was distributed via floppy disks and compact discs bought in physical stores.

    Before the personal computing era (and even the PDP-10), computer users interacted via punch cards, and data was stored on large rolls of magnetic tape. My mother has a fantastic story about seeing this at her university and marvelling at it.

  2. Desktop Internet - Thanks to work by DARPA and CERN, these large computers and eventually personal computers started to talk to each other on something called the Internet. The Internet hit consumers and meant new types of software that could talk to other instances of the same software across the network. The World Wide Web (invented by Tim Berners-Lee), alongside protocols and formats like TCP/IP, HTML, FTP and SMTP, meant software had to be built with collaboration and communication in mind. Yahoo Mail (and other services), IRC, AOL, Google Search, Internet Explorer, and Skype are some of the winners from that period.

  3. Browsers - Once the Internet hit desktop computers, we needed software to visit and interact with websites and web apps. The 90s ended with Microsoft’s Internet Explorer slugging it out in court with the upstart Netscape. But they had barely scratched the surface. Mozilla’s Firefox became popular in the early 2000s with tabbed browsing, which improved multitasking. Sometime within this period, Macromedia (eventually acquired by Adobe) launched Flash, bringing interactive multimedia to desktop browsers.

    The browser era also brought us plug-ins, add-ons, and browser extensions. If you consider desktop browsers an agnostic software platform that works on any compatible device, you could build software to modify webpages or extract data from them. This was a massive boon for some software developers, who made everything from video downloaders to SEO trackers and text editors. Grammarly, one of the most popular tools from this era, launched in 2009!

    Plugins were a primary method of software distribution. Today, extensions and add-ons are still a way to distribute software within desktop browsers.

    Before the end of this era, HTML5 and CSS3 would be developed, resulting in the death of Flash (thanks to Steve Jobs’ letter) and responsive webpages, which allowed developers to build one website for both the desktop and mobile web.

    Asynchronous JavaScript and XML (AJAX) was also invented in this era, providing even more interactivity for webpages. Without AJAX, we might never have had web apps like MySpace and Hi5, which brought along the social media era. Facebook (and dozens of clones) became popular, allowing us to upload pictures, tag friends, and generally interact more on the web.

  4. GSM - Phone calls and SMS predate instant messaging. Many don’t think of the global telecommunications revolution as a computing era. Still, it meant that for the first time, millions of people worldwide had a mobile phone and could carry it anywhere to make phone calls and send messages to friends and colleagues. You could even play games. Nokia’s Symbian would eventually dominate this era and largely dictate how software got built - all of the software had to be designed to fit a tiny screen and work over a mobile internet protocol called WAP. GSM drastically reduced the cost of global communications and brought the world closer together.

    GSM was also defined as “General Street Madness” in some parts of West Africa.



    I will skip over a little thing called the telegraph, which sent messages across wires spanning the entire globe. You couldn’t have had the internet or GSM without the telegraph.

  5. Mobile Internet - As the Internet became more mobile, thanks to improvements in GSM technology (anyone remember EDGE?) and mobile phones, we saw a few interesting things happen.

    Research In Motion’s BlackBerry became the dominant professional (and, in some countries, personal) phone while everyone else tried to fit into this mobile-phone-with-full-computer-keyboard paradigm. BlackBerry’s success and Microsoft’s Windows Mobile efforts meant that, for the first time, professionals could edit work documents right on their phones and email updates to their colleagues, too! I had a Windows Mobile phone as a teenager, and typing full text messages on a phone was pretty cool!

  6. Smartphones - While there were notable improvements in mobile phone technologies from Research In Motion, Nokia and Microsoft, none would make a dent like Apple’s iPhone.

    I had a Moto Q and then a BlackBerry.


    The iPhone ditched the physical keyboard for a virtual one, making the entire front of the phone a screen. Because Apple did not think of its mobile phone as an extension of the desktop computer, it built something completely new, while taking many lessons from the Palm Treo. The iPhone changed how mobile applications were built - developers now had the entire screen real estate to play with, while Apple’s App Store changed how software was distributed. A famous saying from this era is “There’s an app for that”.

    Eventually, Google’s Play Store and Apple’s App Store would become the dominant distribution platforms for mobile applications.

    Smartphones also gave applications new capabilities. Software could now interact directly with a mobile device’s hardware and pull data from the GPS and gyroscope (running and maps apps), camera (social, face-tuning, and image editing apps), microphone, and speaker (recording and calling apps). The smartphone made all kinds of new personal computing possible, like Snapchat, Instagram and Google Maps. Duolingo’s CEO is talking about this era when he references “mobile first” in the letter.

  7. Cloud computing - Improvements in data centre technologies and transmission speeds meant that files could be stored on and retrieved from the internet faster and more cheaply. Cloud computing brought us Salesforce, Dropbox, YouTube, Netflix, Spotify, and millions of other apps we use today. For users, it meant that we began to rely more on apps that stored data on the Internet and less on apps that stored data on our devices. I haven’t seen or used a USB flash drive in about 7 years. For software development teams, cloud computing meant that new tools like GitHub became even more important, and Figma moved design itself to the cloud.

  8. Gaming - While gaming isn’t a particular era, it has benefited from every computing era since the personal computer. Sony and Nintendo are two huge winners here because they make gaming consoles - computers dedicated specifically to gaming. Software developers are winners too, taking advantage of platforms like PlayStation, Nintendo, Windows, Android and iOS to distribute games. If you’ve ever played a game on your phone and then played the desktop or console version, you’ll realise that those are effectively two separate games.

    Many analysts joke that gaming is the perfect way to predict the future of computing and software. It is difficult to argue with them: NVIDIA started out producing gaming chips, which excel at processing a lot of calculations in parallel.

  9. 4G and 5G - I have to mention these technologies at this point. 4G and 5G dropped the cost of accessing the internet on mobile phones to a fraction of its previous price. They also made data access faster, leading to the pivot-to-video era of 2015-2018.

  10. Silicon chips - Silicon Valley is aptly named after the American computing industry’s association with silicon transistors in microprocessors. Without silicon and Moore’s Law, these technological breakthroughs would be impossible. The improving power of chips has been a strong factor in every computing era.


    While Intel brought a personal computing boom, Qualcomm’s Snapdragon powered the mobile computing era, and NVIDIA’s chips and CUDA brought incredible cloud, gaming and research capabilities. Now, they usher us into an era of software with artificially intelligent capabilities.

Artificial Intelligence

When someone studies artificial intelligence, they learn many technical terms: large language models, algorithms, computer vision, machine learning, natural language processing, deep learning, and neural networks. Each of these is a huge field on its own. However, they have combined and collapsed into one consumer-understandable term: artificial intelligence.

This makes defining artificial intelligence a bit difficult, since it is definitely not just a chatbot, but I like how the UK Parliament describes what AI is capable of:

  • interpreting, processing and generating realistic human-like speech and text.

  • interpreting, processing and generating images, videos and other visuals.

  • independently performing tasks in the real world, for example when paired with machinery such as robots.

These tools have been with us for quite some time. Here are some examples of AI usage in software.

Google’s Pixel smartphone was one of the early adopters of AI features for cameras. Google couldn’t compete with flagship smartphones on camera hardware, so it launched mid-range phones with mid-range cameras and built on-device software that improved images, even at night.

Here’s an ad for the Pixel, highlighting its camera features:

Google’s method for improving images caught on, and many phone manufacturers began including camera software that improves photos automatically. In a 2024 interview, Samsung’s EVP said:

There was a very nice video by Marques Brownlee last year on the moon picture. Everyone was like, ‘Is it fake? Is it not fake?’ There was a debate around what constitutes a real picture. And actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture. You can try to define a real picture by saying, ‘I took that picture’, but if you used AI to optimize the zoom, the autofocus, the scene – is it real? Or is it all filters? There is no real picture, full stop.

Patrick Chomet - Samsung EVP

Google isn’t the only company excelling in image recognition thanks to AI. Pinterest has capitalised on this and built a recommendation engine based on the images a user interacts with. But while Pinterest is a great platform, it has made only a modest dent in social media.

On the other side of the world, a company called ByteDance would figure out how to use AI to understand the content and context of videos, feed that understanding into a recommendation engine, and present the results in an app. It succeeded in China with Toutiao and had its sights on the world. After acquiring Musical.ly (a video platform), ByteDance would launch TikTok globally. ByteDance’s understanding of video content meant it could deliver a social media feed without the user following anyone. This had not been done before - every social media platform required users to follow (or friend) existing accounts before delivering a feed.

It’s important to say that TikTok might never have been successful without the ubiquity of smartphone cameras, the mobile internet, improvements in cloud storage, and 4G and 5G technologies. We can see how different technologies combine so that software developers can build and distribute new stuff.

Other uses of AI have quickly become commonplace. Financial applications typically have regulated Know Your Customer requirements when onboarding a new user. Almost every finance app I’ve used in the last five years has successfully captured a picture of my face and matched it to my identity document before giving me access to the application. This feature is only possible with image recognition algorithms, thanks to improvements in artificial intelligence models.

Another use of artificial intelligence is Google’s Waymo self-driving cars, which use computer vision to understand their surroundings and plan their next move. The company first showcased the cars to the public in 2010 and now operates driverless rides in multiple cities.

In addition to self-driving cars, many homes now have Google Nest or Amazon Alexa speakers. Users interact with both devices by voice - the devices can search the web for information, answer questions, buy things online (in the case of Alexa) and control electronics in the home.

Something important to note here: among the big tech companies that use AI for software features, Google is probably the most advanced for consumer and business usage. It uses AI in its search engine, ads network, YouTube, the Android operating system, Google Maps, the Chrome browser, Google Translate, Google Lens, Pixel smartphones, self-driving cars, and Gmail. It has also successfully figured out how to monetise those products - even though Gemini for Google Workspace sucks.

In November 2022, an AI research organisation called OpenAI released the first public version of ChatGPT - a generative artificial intelligence chatbot built on large language models. GPT stands for generative pre-trained transformer, an architecture based on breakthroughs from a Google research team. OpenAI calls this era “The Intelligence Age”.

This single event is largely responsible for the current AI boom. OpenAI’s GPT models and other models like Meta’s Llama, Anthropic’s Claude, and Google’s Gemini are reshaping how software developers build software. I will highlight three ways below.

  1. Software developers and teams can now rely on AI assistants to help them design, write, edit, and deploy code. While this isn’t perfect, almost every software engineer I know uses some AI tool in their workflow.

  2. Chatbots create a new surface area for software distribution. If you believe that chatbots are the future of surfing the internet, then chatbots are also a new channel through which Internet users find the applications they wish to use.

  3. Artificial intelligence models like GPT, Veo 2 (for video), or Stable Diffusion (for images) allow new things to be built. We can now build software that understands a user’s input (image, video, audio, or text) and generates a response across multiple media.
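To make that third point concrete, here is a minimal sketch of a multimodal request using the OpenAI Python SDK: the software sends text plus an image and gets a text response back. The model name, prompt, and image URL are illustrative assumptions, not anything Duolingo or OpenAI prescribes.

```python
# Minimal sketch: send text plus an image to a multimodal model and get text back.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the environment.
# The model name, prompt, and image URL are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any multimodal chat model would do here
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this photo in simple Spanish for a beginner."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The same pattern extends to audio and video with models that support them; the point is that a single request can now mix media types that previously required entirely separate systems.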

So, what is an AI-first product?

Seeing how each era of computing shapes the type of software we can build, and how we build and distribute it, AI-first could mean multiple things.

  1. An application (or a device with an application) collects information from the user, understands it, and makes this understanding the foundation of the entire application. Meta’s Ray-Ban glasses are an example of this - they could be used by differently abled people to interact with the world better. Some other good examples are Google Nest, Amazon’s Alexa, voice assistants, and apps that find you clothes when you upload an image.

  2. Applications that are more interactive and can present information in multiple media formats, since generative AI allows developers and users to create video, images, audio, and text.

  3. A developer could build software on top of chatbots (or other AI applications or models) or use chatbots as a vector for software distribution. There are many jokes about ChatGPT wrappers, but I must remind everyone that Typeform is an HTML and CSS wrapper and a billion-dollar company.

  4. A software developer like Duolingo can create more content across multiple cultures, languages, and geographies, expanding its market and enabling more people to use its products.

  5. Software already enables personalisation by default, but as we’ve seen with TikTok feeds, we could personalise much further. As a product and growth marketing manager, I must consider segmenting different users, personas, and demographics, presenting each with the best software feature for their use case, and sending messages at the right time.

    Unfortunately, today’s segmentation technologies are limited in terms of intelligence. Ten thousand people signing up for my financial services application don’t need the same onboarding experience if I (the software developer) can figure out who they are and modify my offering; a rough sketch of what this could look like follows this list.
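As that rough sketch, here is a hedged example that asks a language model to sort a new signup into one of a few onboarding personas so the app can tailor its first-run experience. The persona labels, signup fields, and model choice are hypothetical, invented purely for illustration.

```python
# Hypothetical sketch: classify a new signup into an onboarding persona with an LLM.
# The personas, signup fields, and model name are illustrative assumptions,
# not a real product's schema. Assumes the OpenAI Python SDK and OPENAI_API_KEY.
import json
from openai import OpenAI

client = OpenAI()

PERSONAS = ["first_time_saver", "small_business_owner", "frequent_remitter"]

def pick_onboarding_persona(signup: dict) -> str:
    """Return one persona label so the app can tailor its onboarding flow."""
    prompt = (
        "Given this signup data, reply with exactly one persona label from "
        f"{PERSONAS}: {json.dumps(signup)}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    label = response.choices[0].message.content.strip()
    return label if label in PERSONAS else "first_time_saver"  # safe fallback

# Example: route a hypothetical new user to a tailored onboarding flow.
print(pick_onboarding_persona(
    {"occupation": "market trader", "stated_goal": "send money to family abroad"}
))
```

The design choice worth noting is the fallback: if the model returns anything outside the known personas, the app quietly drops back to a default flow rather than breaking onboarding.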

While there is a lot of hype around chatbots (can’t lie, they’re pretty cool and helpful!), I am more excited to see what will be built in this era beyond bots. As noted above, each computing generation has brought incredible improvements in human and machine interactions. AI opens up new opportunities for creative builders to once again reimagine software interfaces and deliver value to customers.

Thank you for reading this case study. If you enjoyed it, please subscribe or share it with friends and colleagues by hitting the share button.

Note: Please be advised that the business case studies presented in this publication are intended for informational, entertainment and educational purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.

Except where otherwise stated, I own the vintage computer images captured in the British Science Museum.
