Hackernews Daily

The Podcast Collective

🚗 Demand Soars for Waymo's Pricey Autonomous Rides Despite Uber and Lyft Competition

6/15/2025

Waymo's Autonomous Ride Demand

Despite higher costs, Waymo's autonomous vehicles remain in high demand. In San Francisco, Waymo's average ride costs $20.43, compared with Uber's $15.58 and Lyft's $14.44. Waymo delivers around 250,000 weekly trips across four cities, with customers valuing the novelty of driverless technology and the solo, driver-free experience. Pricing fluctuates, with peak-hour fares running up to $11 higher than Lyft's. Safety remains a concern, yet 40% of users say they are willing to pay more for a Waymo ride.

Endometriosis: A Complex Medical Conundrum

Endometriosis involves endometrial-like tissue growing outside the uterus, causing chronic pain and sometimes infertility. Although it shares biological themes with cancer, endometriosis has no cure and remains underfunded relative to its prevalence. Current treatments focus mainly on symptom management, and the article underscores the need for more research and funding to address the condition.

Critiques of Apple's LRM Limitations Paper

Gary Marcus critiques rebuttals to Apple's paper on Large Reasoning Models (LRMs), stressing that scaling won't achieve AGI. He argues for the necessity of neurosymbolic AI, combining neural networks with symbolic algorithms. Marcus contends that LRMs' reliance on training data reveals significant shortcomings, reinforcing a need for more robust AI architectures.

Peano Arithmetic and Goodstein Sequences

The article explores Peano Arithmetic's (PA) ability to handle Goodstein sequences, noting that PA can verify termination for individual sequences but cannot prove it for all of them, which requires stronger systems such as Zermelo-Fraenkel set theory. It humorously illustrates encoding computation in PA using Lisp, while touching on the foundational limits that Gödel's incompleteness theorems place on PA.

Hierarchies of Consciousness

A recent paper outlines a five-stage hierarchy of consciousness, spanning from rocks to humans, examining phenomenal and access consciousness. It argues against philosophical zombies and prompts discussions on consciousness in biological and artificial systems. Themes include the evolutionary importance of consciousness and its practical role in AI development, sparking philosophical debates on its necessity.


Waymo rides cost more than Uber or Lyft and people are paying anyway

Recent data from ride aggregator Obi highlights that Waymo's autonomous rides remain in high demand despite being significantly more expensive than Uber and Lyft. In the data, Waymo's average fare was $20.43, compared with $15.58 for Uber and $14.44 for Lyft, and Waymo now completes around 250,000 paid trips weekly across four cities. The findings suggest that even with an average premium of $5–10, a notable share of riders are choosing Waymo's service, pointing to strong consumer interest in driverless technology.
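Taken at face value, those averages imply roughly a 31% premium over Uber and a 41% premium over Lyft per ride. A quick back-of-the-envelope Python sketch using only the fares quoted above (variable names are ours, not from the report):

```python
# Rough per-ride premium implied by the quoted average fares.
waymo, uber, lyft = 20.43, 15.58, 14.44

for name, rival in [("Uber", uber), ("Lyft", lyft)]:
    print(f"vs {name}: +${waymo - rival:.2f} "
          f"({(waymo / rival - 1) * 100:.0f}% premium)")
# vs Uber: +$4.85 (31% premium)
# vs Lyft: +$5.99 (41% premium)
```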

The analysis attributes this premium to the novelty and privacy of autonomous vehicles, as well as the unique experience of riding alone without a human driver. Short trips under 1.4 km saw Waymo charging up to 41% more than its competitors, and peak-hour rides showed even wider discrepancies. While passengers cite the appeal of the technology and the ability to avoid interactions with drivers, pricing remains volatile, fluctuating with demand and trip length. Safety concerns persist, with 74% of users expressing reservations about traveling in robotaxis, though active remote monitoring and strict operational protocols are in place.

Hacker News commenters emphasize that autonomous vehicles are perceived more as a premium service than a cost-saving alternative. Some community members joke about paying extra to avoid chatty drivers, while others offer analytical takes on the pricing models, noting that Waymo’s system may lack the algorithmic refinement developed by Uber and Lyft over years of ride data. The overall tone combines curiosity about the evolution of driverless travel with skepticism regarding long-term affordability and the transitional state of the technology.

Endometriosis is an interesting disease

The article explores the multifaceted nature of endometriosis, emphasizing its resemblance to cancer in biological behavior and impact, while highlighting ongoing gaps in scientific understanding. Endometriosis is characterized by the growth of endometrial-like tissue outside the uterus, leading to chronic pain and infertility. While the prevailing theory of retrograde menstruation partially explains its origin, it fails to account for all cases, especially those found in non-menstruating individuals, prompting calls for more comprehensive research approaches.

Further analysis reveals that treatment remains largely symptomatic, with hormonal therapy and surgery aiming to relieve discomfort rather than provide a cure. Unlike cancer, endometriosis is considered benign, as it does not form expansive tumors or threaten life directly, yet it significantly impairs quality of life—sometimes to a degree comparable to metastatic diseases. Despite substantial prevalence and severity, research funding and awareness lag behind that of similarly common conditions like breast cancer.

Hacker News commenters underscore frustration at the lack of research funding and societal neglect, with many echoing concerns about misdiagnosis, inadequate treatment options, and slow scientific progress. Discussions also draw attention to the disease’s significant quality-of-life impact, the inadequacy of current explanatory models, and the missed opportunity for medical advancements through deeper exploration of endometriosis. Some participants share personal anecdotes highlighting diagnostic delays and the psychological toll, while others advocate for reframing the condition as a high-priority research topic.

Seven replies to the viral Apple reasoning paper and why they fall short

Gary Marcus offers a pointed critique of the responses to Apple’s recent paper detailing the limitations of Large Reasoning Models (LRMs), underscoring that neither model scaling nor reliance on external code resolves fundamental deficiencies in machine reasoning. He argues that retorts—such as invoking human fallibility or proposing ever-larger models—do not address the core problem: current LRMs consistently fail at generalization and robust reasoning outside their training distributions. The central insight is that scaling alone will not push these models to the level of artificial general intelligence (AGI), as general intelligence demands conceptual understanding, not just the ability to regurgitate or pattern-match from vast training data.

Marcus notably emphasizes the shortcomings of allowing LRMs to run pre-written code as a workaround, stressing that true general intelligence requires integration of neural and symbolic approaches—a neurosymbolic paradigm—if AI is to achieve reasoning capacity similar to humans. He frames the discourse as recognition of longstanding limitations, now being widely acknowledged because a leading industry player has confirmed them publicly. The critique highlights that, despite impressive progress in generative AI, core weaknesses remain, especially in algorithmic problem-solving and adaptability, areas crucial for AGI.

Hacker News readers engaged deeply with both the technical and cultural implications of these findings. A number of top comments echo skepticism about scaling as a route to AGI, with one highlighting the inability of leading models to solve basic algorithmic challenges like the Tower of Hanoi, reinforcing Marcus’s argument. Some viewed Apple’s critique as a strategic move to temper AI hype, with minor suggestions of rivalry coloring the debate. Several users injected levity, sharing examples of AI blunders—such as wildly incorrect mathematical outputs—to humanize and demystify the technology’s present limitations. Overall, the discussion reflects a community increasingly aware of the boundary between impressive language generation and genuine machine reasoning.
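For context on that benchmark, the Tower of Hanoi the commenters mention has a textbook recursive solution that fits in a few lines, which is part of why model failures on it read as telling. A minimal Python sketch (illustrative only; not code from the paper or Marcus's post):

```python
def hanoi(n, source, target, spare):
    """Print the moves that transfer n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # move the top n-1 disks out of the way
    print(f"move disk {n}: {source} -> {target}")
    hanoi(n - 1, spare, target, source)   # stack them back on top of the moved disk

hanoi(3, "A", "C", "B")  # prints the optimal 2**3 - 1 = 7 moves
```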

Peano arithmetic is enough, because Peano arithmetic encodes computation

The central argument underscores that Peano Arithmetic (PA) is computationally expressive enough to encode intricate processes, allowing it to prove many complex mathematical statements on a case-by-case basis. However, PA's limitations become evident with universally quantified statements like the termination of all Goodstein sequences: while specific sequences can be individually verified within PA, it cannot establish the theorem's truth for all natural numbers. This distinction highlights why more robust frameworks, such as Zermelo-Fraenkel set theory, are needed for certain kinds of universal mathematical proofs, reflecting intrinsic constraints illustrated by Gödel's incompleteness theorems.
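For readers unfamiliar with the construction: a Goodstein sequence starts from a number written in hereditary base-2 notation (exponents are themselves expanded in the base), then at each step every occurrence of the current base is replaced with the next base and 1 is subtracted. Goodstein's theorem says every such sequence eventually reaches 0, yet PA can only certify this one starting value at a time. A minimal Python sketch of the construction, following the standard definition rather than anything from the article:

```python
def hereditary(n, base):
    """Hereditary base-`base` representation of n as (exponent, coefficient) pairs."""
    terms, exp = [], 0
    while n:
        n, digit = divmod(n, base)
        if digit:
            terms.append((exp, digit))
        exp += 1
    # exponents are themselves rewritten in hereditary notation
    return [(hereditary(e, base), c) for e, c in terms]

def evaluate(terms, base):
    """Evaluate a hereditary representation in the given base."""
    return sum(c * base ** evaluate(e, base) for e, c in terms)

def goodstein(n, steps=6):
    """First `steps` terms of the Goodstein sequence starting at n."""
    seq, base = [n], 2
    for _ in range(steps - 1):
        if n == 0:
            break
        n = evaluate(hereditary(n, base), base + 1) - 1  # bump the base, subtract 1
        base += 1
        seq.append(n)
    return seq

print(goodstein(3))   # [3, 3, 3, 2, 1, 0]  (reaches 0 quickly)
print(goodstein(4))   # [4, 26, 41, 60, 83, 109]  (keeps growing for a very long time)
```

Even the sequence starting at 4 runs for an enormous number of steps before finally reaching 0, which hints at why the general termination claim needs induction up to ε₀ and so lies beyond PA's reach.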

A key illustrative point is the use of Lisp as the vehicle for encoding computation in PA. The article leverages Lisp's minimalistic syntax to demonstrate how parsing and bootstrapping a programming language can be systematically formalized within PA, sidestepping the operator-precedence complications found in other languages. Through witty commentary and technical exploration, the author details how primitive arithmetic operations suffice to construct basic computational workflows, emphasizing the surprising depth achievable even within foundational number theory.
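To give a sense of why a Lisp-style surface syntax is so convenient for this kind of formalization: s-expressions can be read with a handful of lines and no precedence rules at all. A minimal Python sketch of such a reader (purely illustrative; the article's actual encoding lives inside PA, not in Python):

```python
import re

def parse(src):
    """Read one s-expression from `src` into nested Python lists."""
    tokens = re.findall(r"[()]|[^\s()]+", src)

    def read(pos):
        token = tokens[pos]
        if token == "(":
            items, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = read(pos)
                items.append(item)
            return items, pos + 1          # skip the closing ")"
        return (int(token) if token.lstrip("-").isdigit() else token), pos + 1

    expr, _ = read(0)
    return expr

print(parse("(+ 1 (* 2 3))"))   # ['+', 1, ['*', 2, 3]]
```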

Hacker News commenters resonated with the technical and humorous tone, joining in with their own observations about PA's limitations at ε₀ and the creative encoding analogies. The community discussion reflected a mix of deep dives into ordinals, debates about the reach of first-order logic, and an appreciation for the lighthearted Lisp examples. Many valued the way the piece clarifies why some true mathematical facts remain unprovable in PA, while others recommended further reading to explore ordinal analysis and the boundaries of formal systems.

How to Build Conscious Machines

The article synthesizes recent theoretical advances on consciousness, highlighting a five-stage hierarchy of conscious experience that ranges from inanimate matter to fully developed human minds. Central to the work is the argument that both phenomenal and access consciousness—subjective experience and the ability to act upon it—are fundamentally rooted in functional, physical processes. By rejecting the concept of philosophical zombies (entities outwardly identical to conscious beings but lacking inner experience), the paper aims to bridge gaps between philosophical speculation and empirical, functionalist models of consciousness.

Among the key secondary findings, the discussion emphasizes that phenomenal consciousness is not a mystical property but rather an emergent function, arising from specific organizational or computational capacities. The proposed framework attempts to reconcile philosophical puzzles with neuroscientific perspectives, positing that consciousness in machines is conceivable if similar functional hierarchies and mechanisms can be instantiated. The text also touches on the practical and ethical questions of building or identifying "conscious" machines—suggesting that functional equivalence, rather than biological substrate, could be the critical criterion for consciousness.

Hacker News commenters engage with the article through a lively philosophical debate, reflecting deep skepticism and curiosity over the nature and utility of conscious AI. Some participants argue that even if machine consciousness were theoretically possible, its relevance—outside of anthropocentric projections—remains unclear. Others question whether AI truly needs or benefits from consciousness, raising broader issues about the evolutionary advantages (or lack thereof) and potential ethical dilemmas. Overall, the comment section highlights ongoing tensions between scientific models and subjective intuition about inner experience, with voices split between enthusiasm for speculative research and caution about technological or definitional overreach.