From my observations, people who have done well in their careers have done so by being talented, hard-working, a little lucky, and by riding a tailwind. These factors all play a role in how well they have done, and I’m sure the ones I have observed aren’t exhaustive. Talent and luck aren’t really factors I can control. With talent, I was born with certain natural abilities that could make me good at particular skills; if I had exposure to developing those skills, had interest, and practised, I could build them. Luck I can’t control either; however, I can increase my chances of being lucky by preparing for opportunities and being in the right places geographically and sector-wise (places where other people have historically been lucky).

Hard work, in my opinion, is self-explanatory: it’s how many hours I effectively spend working on something, building a skill, or performing a task. Riding a tailwind has been an interesting topic to me recently. It seems I could be talented, hard-working, and lucky and achieve good results, but if I want great success, I need to ride a tailwind, a boost, so to speak, which I will call tailwind success. How can I recognise tailwind success? As far as I can tell, it’s continuous compounding success growing at a higher rate than the broader ecosystem (the world economy, an industry, a sector). I can choose the level of abstraction I want, influenced by underlying policy, discoveries, inventions, and engineering advances.

Society benefits from tailwind success to varying degrees. Consumers benefit broadly by consuming its products, which improve quality of life. Operators benefit more by consuming and utilising tailwind products to produce other goods or services. The major beneficiaries, financially and in terms of impact, are the producers: they produce, utilise, and consume tailwind products. I am not sure how much of this is a conscious decision, whether one becomes a consumer, operator, or producer. For me, it has previously been unconscious, and I want to change that. This is the main reason for writing this essay. By the way, today is the American presidential election, and everyone online is on edge.

Looking back on my life, many things have changed. Some things have changed at a higher rate than others. For example, the food I eat or the clothes I wear haven’t changed much and have mostly remained constant; what has changed is my taste. However, other things have changed at a rapid pace in my lifetime, and I have felt the desire to update them roughly every two years. Things that haven’t changed much in my lifetime or in my daily life I am going to classify as post-tailwind products (clothes, house appliances, and food). On the other hand, things that have changed and continue to change at a high rate I will call tailwind products. These are my internet speed, social networks, phone, and computer, all of which have strong tailwinds. As I mentioned before, tailwind products have the property of growing at a higher rate than the broader ecosystem, caused by an underlying advantage that can sometimes be explained by an empirical law. From my understanding, for social networks, it’s predominantly been Metcalfe’s Law, and for my phone and computer, it’s been Moore’s Law. I have enjoyed these rapid improvements, and they have enhanced my quality of life as a consumer, enabling me to connect with my friends and family at light speed. However, operators and producers who rode these tailwinds early enough have benefited more, financially and in terms of impact.
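To make the “growing faster than the broader ecosystem” idea concrete, here is a small Python sketch comparing a stylised Moore’s-Law-style tailwind (doubling every two years) against a broad economy growing at an assumed 3% a year. All numbers are illustrative assumptions, not real data.

```python
# Stylised comparison: a tailwind product compounding at a
# Moore's-Law-like rate versus the broader economy.
# The 3% economy growth and 2-year doubling are assumptions
# chosen for illustration only.

def compound(start: float, annual_rate: float, years: int) -> float:
    """Value after compounding `annual_rate` for `years` years."""
    return start * (1 + annual_rate) ** years

YEARS = 10
DOUBLING_RATE = 2 ** 0.5 - 1  # ~41.4%/year, i.e. doubling every 2 years

economy = compound(100, 0.03, YEARS)
tailwind = compound(100, DOUBLING_RATE, YEARS)

print(f"Economy after {YEARS} years:  {economy:.0f}")   # ~134
print(f"Tailwind after {YEARS} years: {tailwind:.0f}")  # 3200
```

After a decade, the same starting point ends up roughly 24x larger on the tailwind path, which is the gap between consuming steady growth and riding a compounding advantage.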

This brings me to the new tailwind: artificial intelligence (AI) and its tailwind product, Large Language Models (LLMs), currently in chatbot form, namely ChatGPT, Claude, and Gemini. In my opinion, these are the tailwind products, and they are riding the scaling hypothesis—a belief that these LLMs are going to continue getting smarter and more useful indefinitely as they grow bigger, use more compute, and receive more data.
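The scaling hypothesis is often summarised as a smooth power-law relationship: quality keeps improving as compute, model size, and data grow. Here is a toy Python sketch of that shape; the constant and exponent are made-up illustrative values, not numbers from any real scaling study.

```python
# Toy sketch of the scaling-hypothesis shape: loss falls as a
# power law in compute, so every 10x of compute removes the same
# *fraction* of the remaining loss. `a` and `alpha` are invented
# illustrative values, not measured ones.

def toy_loss(compute: float, a: float = 10.0, alpha: float = 0.1) -> float:
    """Hypothetical loss that decreases as compute grows."""
    return a * compute ** -alpha

for c in (1e3, 1e4, 1e5):
    print(f"compute={c:>8.0f}  loss={toy_loss(c):.3f}")
```

The key property is that the improvement ratio per 10x of compute is constant, which is why believers in scaling expect steady gains for as long as the power law holds.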

So, I have been thinking to myself: how can I maximise this new tailwind? As a consumer, I should become proficient in using these tools. The chatbots have improved a lot since ChatGPT’s release in November 2022.

No.1: They have become more truthful—they “bullshit” less (hallucinate less) and now reference internet articles, so I find their advice more reliable.

No.2: They have become multimodal and have longer context windows (they can process long text, images, video, etc.), allowing me to attach files and interact with them more effectively.

No.3: Voice mode—you can now more or less have a natural conversation with the chatbot. I really like this feature, and I use it quite a lot.

No.4: The chatbots are becoming more agentic. This is interesting because it means that, in the near future, the chatbot could perform tasks on my behalf, e.g., making an online order or programming an application by controlling my computer. Wow.

So, I guess learning how to use these tools effectively and integrating them into my workflow is important. As an operator, it would be great if I could find a way of integrating AI into a service or product that I am producing to add more value. However, this is easier said than done. Since the chatbots are becoming more capable and doing more things, I’m not sure where I can use LLM technology to offer a unique service that isn’t already provided by a ChatGPT subscription; perhaps this is due to my current lack of imagination. One thing LLMs are good at is processing large corpora of text, so use cases such as customer service for businesses may be worth investigating, as well as admin work for my future business, such as processing documents (data analysis, writing summaries, making presentations, accounting, filing taxes, legal advice). I am going to continue observing and try to capitalise on any operational opportunities I recognise.

As a small retail investor, there could be some opportunities to benefit financially from the AI adoption wave. Going back to the tailwind for AI—the scaling hypothesis—chatbots need more data and compute to keep improving, and AI training companies are going to continue investing in training infrastructure until there is unequivocal evidence that scaling doesn’t work (i.e., that it doesn’t make AI systems smarter and more useful). I am going to be humble and believe that scaling laws will persist. After a quick Google search, it seems the companies that own the most data are the big tech firms, namely Google and Meta, and they will probably do well in the AI age. Reddit is also a contender for providing human-generated data for AI training and has signed licensing deals with AI companies (there are Reuters and WSJ articles covering these deals). For compute, the companies heavily involved in providing semiconductor technology are currently NVIDIA, TSMC, and Broadcom, and I expect them to continue doing well for the reasons discussed above.