I ended up spending far more time than expected this year in Silicon Valley 🌉 (San Francisco) and Toronto 🇨🇦 (a major hotspot for AI and neuroscience).
While I was out in the Valley, I forgot about Daniel Gross for a while, but his name popped up again when I was visiting the University of Toronto, where OpenAI cofounder Ilya Sutskever studied and Geoffrey Hinton ('Godfather of AI', recent Nobel laureate, and a pioneer of neural networks) taught. Turns out, Gross and Ilya even started a company together: SSI, launched with an unprecedented big-bang $1B raise.
I'm trying to cater to non-Valley readers as well, so you'll notice I explain even some common startup and engineering terms in layman's terms.
Notes: Since this was a private session with no photos or note-taking allowed, these insights are all from what I remember.
Why I picked Daniel Gross / About Daniel Gross
His journey as an outlier resonates with me
Like him, I launched my own thing at 18 and came from an unexpected country (before immigrating to the US). Daniel was YC’s youngest founder (18) when he co-founded Cue, which Apple later acquired for $40M
His craft changed my life trajectory forever
Joining his Silicon Valley accelerator, Pioneer, was a game-changer for me. Its mission to find “Lost Albert Einsteins” helped this “Lost Albert” (aka me) tap into my own 'E = MC²' potential, connecting me with lifelong friends from places like Brazil, the Philippines, Pakistan, and Canada, well beyond the usual Silicon Valley circles.
His book, Talent, opened my eyes to a new way of viewing success, showing me how focusing on energy over accolades and prestige can be a truer predictor of success.
Role model in AI / LLM
He co-founded AI Grant
AI Research and Market Trends
At first glance, the second derivative of AI progress fluctuates wildly — accelerating one moment and slowing down the next. (e.g., we saw a brief pause after ChatGPT's launch, but then Gemini, Sora, and Claude 3 hit the scene with new models in rapid succession)
The truth is, important research papers are emerging daily; it just takes time for people to notice them and recognize their significance. There's typically a 6-12 month lag before the industry adopts techniques from new research papers.
Current investment trends reveal a tendency for companies to piggyback on proven, successful ideas rather than pursue original ones, which means many valuable research breakthroughs go under-appreciated.
To succeed in AI, we’ve got to play it smart. Keep an eye on these overlooked ideas and strategically time your launch to maintain a competitive edge against fast followers.
Case in point from my experience, observations, and bets — recent Voice AI boom
Should we stick with existing providers (Toronto AI companies like Vapi & Voiceflow), or integrate directly with OpenAI's real-time API while it's still nascent (gaining benefits like punctuation control, but risking spending effort on problems the universe will eventually solve for you, like concurrency and latency)?
Strike a balance between switching and staying put when you need more market insights. This is particularly relevant if you're in vertical SaaS.
The key here is to identify which LLM problems we should solve ourselves versus which ones to let the universe (OpenAI, Llama, etc.) solve for us.
Insights on AI Products and the Era of Technical Founders
Secret: Tons of good ideas are out there in papers, but they’re not new—they're 4-5 years old.
Case in point:
Look at HumanEval: the top five entries weren't foundation models but papers using existing models in novel ways (e.g., attaching directly to a debugger, better forms of search, Tree of Thoughts).
These ideas had been floating around for months, but no one took them seriously. Cognition (also an SPC alum like myself) really cared, and now everyone is going to copy them.
Based on observation, great AI companies strike a balance between building their own models and focusing on the product. Perplexity, for instance, is laser-focused on optimizing the product rather than obsessing over building its own models. That said, great technical founders could build models if they wanted to, and they fully understand the science behind it.
2017 & 2018 were the marketer-founder’s era—B2B SaaS, Software 2.0 (software as a service). Back then, a simple Flask application was enough. Now, it’s the era of the technical founder, where "service as software" has raised the bar for the minimum viable product (MVP), beyond just a simple Python Flask app.
The technical founder's role has shifted from simply architecting software to synthesizing technical knowledge from papers and PDFs with market insights (while, needless to say, still building the end-to-end product). The problem for most software engineers is that they've never encountered the kinds of back-office processes these products automate, so the biggest hurdle is often uncovering one of those processes to tackle.
The best founders behind some of the best companies are people who, if need be, can build their own models, fine-tune them, or change a model's architecture to make it multimodal.
On the surface, it feels like the barriers to launching AI products are lower. In practice, the competitive field is quite small, because people who excel at technical, research, and market execution all at once are rare. For example:
You need an in-house hallucination-detection and guardrail mechanism for a specific domain where the dataset and metrics aren't available on Hugging Face out of the box (a minimal sketch follows after this list).
Vertical SaaS will be the next big trend—but the key will be combining business, research, and engineering. (Separate blog post needed for “why now” and what “changes” have propelled this shift.)
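To make that concrete, here is a minimal sketch of what such an in-house guardrail might look like, assuming a curated domain allow-list and a naive regex extractor (both hypothetical, purely for illustration):

```python
# Illustrative sketch only: hypothetical allow-list and helpers, not a real library.
import re

ALLOWED_ENTITIES = {"SKU-1001", "SKU-1002"}  # curated, domain-specific knowledge base


def extract_entities(text: str) -> set[str]:
    # Naive extraction for the sketch; a real system might use a domain NER model.
    return set(re.findall(r"SKU-\d+", text))


def guardrail(model_output: str) -> str:
    # Flag entities the model invented rather than passing them along to the user.
    unknown = extract_entities(model_output) - ALLOWED_ENTITIES
    if unknown:
        raise ValueError(f"Possible hallucinated entities: {sorted(unknown)}")
    return model_output
```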
Taste Wins in Business Opportunities
Daniel's take: the best builders tend to be the disagreeable type (they don't just take someone else's ideas; they think from first principles). That's why Daniel doesn't publish an RFS (Requests for Startups).
At the end of the day, it’s all about feedback loops (to gather user insights) & building an excellent product — no one really cares about your research papers, algorithms, or hyper-performant Nvidia compute hardware.
What really matters is having someone who cares deeply and has genuinely good taste (great article on this: “Taste is eating Silicon Valley”) - a rare breed. Again, contrary to popular opinion, the number of true competitors you have is far smaller than you think.
Case in point on how taste sets successful companies apart from their peers: companies like Stripe and Rippling succeeded by offering better products and communicating them effectively through aesthetic design and coherent APIs.
Stripe
in 2010 & 2011, why should Stripe exist despite Paypal & hundreds of payment gateways are already there?
Stripe succeeded by offering more coherent API and a aesthetic designs
7 lines of code & 1 api call instead of 3 separate API calls (authorization, payment, settlement) for charging credit card
Ready-to-use sandbox environment.
(This resonates with my own experience building a Stripe-like payments company for emerging markets, started in Berkeley. While I was still with the company, one of our trusted investors, Amit Jhawar from Accel, who was previously COO of Braintree and GM of Venmo, shared the same sentiment: a consolidated charge endpoint and customer object matter, so customers can keep the same integration even across different payment methods like ACH, SEPA, direct debit, and e-wallets. Definitely worth a separate post to share more insights.)
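Here is a minimal sketch of that contrast, using hypothetical function names rather than Stripe's (or any gateway's) actual SDK:

```python
# Illustrative sketch only: hypothetical client objects and method names.

# Legacy-style gateway: three separate calls the integrator must orchestrate.
def charge_legacy(gateway, card, amount_cents):
    auth = gateway.authorize(card=card, amount=amount_cents)  # place a hold on the funds
    payment = gateway.capture(authorization_id=auth["id"])    # actually take the money
    return gateway.settle(payment_id=payment["id"])           # move funds to the merchant


# Consolidated endpoint: one call, one charge object, regardless of payment method.
def charge_consolidated(client, payment_token, amount_cents):
    return client.charges.create(
        amount=amount_cents,
        currency="usd",
        source=payment_token,  # tokenized card, ACH, e-wallet, etc. behind one object
    )
```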
Rippling >>> ADP
Importance of all-in-one + amazingly fast support
But taste for the sake of taste isn't enough; it has to be paired with actual value.
Taste in AI:
Model interface breakthrough - APIs - changed the game.
Application interface breakthrough: instead of forcing users to adopt new forms and buttons, require no change at all and stick with the interfaces they already use (email, Excel, phone calls, etc.). A mind-blowing experience.
The incumbents are slightly off balance. If you pick a domain you're genuinely excited about, you can accelerate your learning, build something in it, and have a real shot at being number one in that world. I see parallels here with Palantir's strategy of dominating a small market (a point also made by Sam Altman, ex-YC president and cofounder of OpenAI), betting that the market itself is dynamic.
Common Startup Pitfalls to avoid / tarpits
making something people don’t want.
getting too married to particular solutions
Not intentionally reflecting and accumulating learnings to reach a higher degree of accuracy when making bets (my favorite book on this: Thinking in Bets by Annie Duke)
Daniel shared:
If you're a genius like Steve Jobs, your bets will be about 65% right.
Most people (including Daniel) start at around 20% right.
Talk to A LOT OF CUSTOMERS (unless you already understand their psychology) instead of solely building the product. Even though this advice gets repeated constantly, most people don't really do it, because talking to customers is painful:
It gets you out of coding flow
You’ll usually learn things that are bad news
Talking to customers ≠ asking customers what they want. Counterintuitively, engage and empathize with them: learn what their problems are firsthand, work in their shoes, try to be the customer.
The alchemy of it is a little bit weird. But if you just get that right and you spend a lot of time with your customers, everything else will be smooth sailing
Common trap:
It's easy to focus on what's sexy about a new algorithm instead of asking, “What's the value for the customers you're building for?” People on Twitter may care, but none of your (paying) users do.
Open Source Strategy for Startups
Daniel's POV: open-source-first companies are often successful IN SPITE OF, NOT BECAUSE OF, the fact that they're open source.
My view:
If your product is fundamentally different or rests on a 10x better, mind-blowing thesis (e.g., Supabase's relational bet challenging Firebase), open source makes sense, since you need extremely fast feedback loops to validate your hypothesis.
Otherwise, you need a very strong reason for an open-source approach, because you'll have to handle the noise of “users who aren't buyers,” figure out their path to becoming paying customers, and operationalize a community.
Essentially, there are more pillars to deal with:
Project-Community fit (top funnel)
Persona: developers
Measure: GitHub stars
Product-Market fit (design partners + usage)
Persona: Users
Measure: Downloads
Value-Market fit + retention
Persona: Buyers
Measure: revenue
e.g., GitHub (not open source) >>>> GitLab (open source); Linux is the exception -- open source and wildly successful.
Parameters to consider for closed source vs open source?
Great example: Red Hat (a packaged distribution of Linux)
They didn't start fully open source, but took a hybrid approach
Open-source core with a closed-source component from day one (the packaging and distribution layer)
It worked in the fullness of time: acquired by IBM for $34 billion
Open source is the bedrock of Silicon Valley. Large companies definitely should support it (e.g., WebKit, supported by Apple and Google). For startups, it's crucial to intentionally consider what you're trying to achieve.
Insights on AI agents
GPT-3.5 outperforms Claude 3 in HumanEval tests.
The hard part about vertical SaaS: there's no benchmark. So we have to think through the metrics ourselves instead of solely chasing HumanEval and GSM8K scores; a minimal sketch of a hand-rolled domain eval follows below.
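For illustration, here is a minimal sketch of such a hand-rolled eval, assuming you maintain a small labeled set of golden cases because no public benchmark covers your vertical (all names here are hypothetical):

```python
# Illustrative sketch only: define your own pass/fail criteria over a small golden set.

def passes_domain_checks(output: str, case: dict) -> bool:
    # Replace with whatever correctness means in your vertical
    # (correct invoice totals, valid ICD-10 codes, required disclaimers, ...).
    return all(field in output for field in case["required_fields"])


def evaluate(run_model, golden_cases: list[dict]) -> float:
    # Score the model on your own metric instead of a public leaderboard.
    passed = sum(
        passes_domain_checks(run_model(case["input"]), case) for case in golden_cases
    )
    return passed / len(golden_cases)
```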
Full agentic experience:
Crafting an agentic experience involves setting the model to retry tasks appropriately and deciding on the right number of retries. As more people try to run agentic tasks, they quickly realize each user request might require up to 100 model requests to ensure the best response. Techniques like self-consistency and voting/consensus mechanisms can help refine these interactions (more on self-consistency).
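As a rough sketch of the self-consistency idea, assuming a placeholder call_model() function rather than any specific provider's SDK:

```python
# Illustrative sketch of self-consistency: sample several answers, then majority-vote.
from collections import Counter


def call_model(prompt: str, temperature: float = 0.8) -> str:
    # Placeholder for a real LLM call (OpenAI, Anthropic, a local model, etc.).
    raise NotImplementedError


def self_consistent_answer(prompt: str, n_samples: int = 10) -> str:
    # Sample independent completions at a non-zero temperature...
    answers = [call_model(prompt, temperature=0.8) for _ in range(n_samples)]
    # ...then return the most common answer (the voting / consensus step).
    most_common, _count = Counter(a.strip() for a in answers).most_common(1)[0]
    return most_common
```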
This shift may prompt companies to reassess GPU depreciation schedules since agentic tasks increase demand, changing how GPUs are valued.
Historically, skepticism around agentic applications kept product development focused on simpler tasks like autocomplete. Shifting to agentic tasks unlocks new behaviors and possibilities. For instance, if you ask ChatGPT to code, you might get a response after 20 seconds, only to find it’s wrong. At that point, you’re likely to lose focus, maybe switch to Reddit, read through posts, and then return to try again. This cycle can become a frustrating loop, showing how poorly calibrated agentic tools can lead to a disruptive, scattered workflow.