
But who's gonna produce that once Paramount owns HBO?

If they're half as likely to convert but four times wealthier, does it matter?

How would we know that's right based on entirely anonymous sets of people? I'm genuinely missing how we have any information about the societies these authors are from (which is why I asked!).

It seems to me that basically the only information on a low trust society we have is that OpenReview is itself low trust.


Nah, they probably just have Copilot as a bullet point on a slide, count that as "using AI", and are psyched for their next board meeting.

China uses Capitalism as a tool where the Party feels it would be beneficial (for the Party), and crushes it mercilessly when it gets in the way (other than this real estate problem they have right now).

In the U.S. we have mistaken Capitalism for a religion, and so the tail wags the dog, so to speak. Since our founding we have made some attempts at finding a balance between the tools of Capitalism and of socialism (more in the Democratic Socialism style than the Communism style), and we had a good run in the decades after WWII. But starting with McCarthyism, and really picking up under Reagan, we have prided ourselves on adopting Capitalism as a religion, and it shows up in both the income inequality and the increasing (and corrupting) role of money in our politics/government.


"Neurotypical"/"Neurodivergent" does the same thing, it just specifies the domain of abnormality. It is still better than "normal", but the difference is of degree rather than kind.

If you are specifically distinguishing autistic and not-autistic, "allistic" is more specific than "neurotypical" (one can be neurodivergent and not autistic) and also avoids any implication that one side is normal and the other is not.


I've got two laptops at my new job. They sent me a Windows one when I asked for a Linux one. I had to set up the laptop myself before I could begin working.

Honestly, I had to do a lot of workarounds to get comfy. There's annoying stuff I cannot uninstall.


(Nova dev here)

Nova's execution model is a lot friendlier to implement than Prolog's, for one.

One big reason I reach for Nova is when I have something -very- state-machine shaped. It is quite good at that.

I'll try to come back later with more explanations.
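In the meantime, here's roughly what I mean by "state-machine shaped", sketched in plain Python rather than Nova (the states and events are made up for illustration, not Nova syntax):

    # Toy "state-machine shaped" problem, sketched in plain Python.
    # The states and events are hypothetical, not Nova syntax.
    TRANSITIONS = {
        ("idle", "connect"): "handshaking",
        ("handshaking", "ack"): "connected",
        ("connected", "close"): "idle",
    }

    def step(state, event):
        # Unknown (state, event) pairs are rejected rather than guessed at.
        next_state = TRANSITIONS.get((state, event))
        if next_state is None:
            raise ValueError(f"no transition from {state!r} on {event!r}")
        return next_state

    state = "idle"
    for event in ("connect", "ack", "close"):
        state = step(state, event)
        print(state)  # handshaking, connected, idle

The appeal is that the whole behavior lives in one declarative table you can inspect and exhaustively test.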


As an EE who has worked in engineering for 30 years, I ran out of fingers and toes 29 years ago trying to count the asocial, incompetent Dark Triad programmers who can only relate to the world through esoteric semantics unrelated to the engineering problems right in front of them.

"To add two numbers I must first simulate the universe." types that created a bespoke DSL for every problem. Software engineering is a field full of educated idiots.

Programmers really need to stop patting themselves on the back.


All of them? I know of no Linux distros that do anything in particular to integrate AI.

Although knowing Canonical they might add something to Ubuntu sooner or later.


People have an actual world model, though, that they have to deal with in order to get the food into their mouths or to hit the toilet properly.

The "facts" that they believe that may be nonsense are part of an abstract world model that is far from their experience, for which they never get proper feedback (such as the political situation in Bhutan, or how their best friend is feeling.) In those, it isn't surprising that they perform like an LLM, because they're extracting all of the information from language that they've ingested.

Interestingly, the feedback people use to adjust the language-derived portions of their world models is whether demonstrating their understanding of those models seems to please or displease the people around them, who in turn respond in physically confirmable ways. What irritates people about simpering LLMs is that they're not doing this properly. They should be testing their knowledge with us (especially their knowledge of our intentions or goals), and have some fear of failure. They have no fear and take no risk; they're stateless and empty.

Human abstractions are based in the reality of the physical responses of the people around them. The facts of those responses are true and valid results of the articulation of these abstractions. The content is irrelevant; when there's no opportunity to act, we're just acting as carriers.



Dude, go buy a used Mac mini for $150, sign your stuff with it, and move on.

We can talk all day about how this _shouldn’t_ be necessary, but you are tilting at windmills trying to get Apple signing to work without Apple hardware. You’ve definitely spent more of your time trying to make this work than if you’d just bought a cheap Mac mini.


> Surely you can think of something cool to build with that, which doesn't involve money.

People have been saying this for nearly a decade, and many highly motivated people have failed to find a use case that works and couldn’t be done better with a traditional database. It’s past time to ask if there’s actually any gold in them hills.


Y-yeah. HYPOTHETICALLY, this is something an adversary to the USA might attempt to do, and it would really kneecap the US if they were successful.

But that would only happen if the USA decided to totally financialize all sectors of its economy and make a small set of oligarchic corporations THE load-bearing element of its strategic capacity, leading us to chase market returns even if those returns totally kneecapped our ability to build anything of actual value.

Good thing we haven't done that!


Composites in that style are also typically very durable, often more so than the original material. I think GP was more likely talking about constructions of pressboard and plywood, which are (charitably) less durable.

No answer I give will be satisfying to you unless I can come up with a rigorous mathematical definition of understanding, which is de facto solving the hard AI problem. So there's not really any point in talking about it, is there?

If you're interested in why compression is like understanding in many ways, I'd suggest reading through the wikipedia article on Kolmogorov complexity.

https://en.wikipedia.org/wiki/Kolmogorov_complexity
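Kolmogorov complexity itself is uncomputable, but you can get at the flavor of "compression as understanding" with an off-the-shelf compressor. A rough sketch of normalized compression distance using Python's zlib (a standard proxy, nothing rigorous):

    # Approximate shared structure between two strings with zlib as a
    # stand-in for the (uncomputable) Kolmogorov complexity.
    # Lower NCD means more shared regularity.
    import os
    import zlib

    def c(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    a = b"the cat sat on the mat " * 20
    b = b"the cat sat on the hat " * 20
    noise = os.urandom(len(a))

    print(ncd(a, b))      # small: the strings share most of their structure
    print(ncd(a, noise))  # near 1: little shared regularity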


It's fascinating to look at each technical component of cognition in human brains and contrast it against LLMs. In humans, all sorts of parallel asynchronous processes are running, with prediction of columnar activations seemingly the fundamental local function: tens of thousands of minicolumns and regions in the brain, corresponding to millions of networked neurons, use the "predict which column fires next" objective to increment or decrement the relative contribution of any functional unit.

In the case of LLMs you run into similarities, but they're much more monolithic networks, so the aggregate activations are going to scan across billions of neurons each pass. The sub-networks you can select each pass by looking at a threshold of activations resemble the diverse set of semantic clusters in bio brains - there's a convergent mechanism in how LLMs structure their model of the world and how brains model the world.
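In toy form, that selection might look like keeping a layer's strongest activations and calling whatever survives the "sub-network" an input recruited (a hand-wavy sketch with made-up shapes; real circuit analysis is far more involved):

    # Toy sketch: which units does an input "recruit"? Keep the top-k
    # activations of one (made-up) layer and compare across inputs.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 512))           # one layer's weights, made up

    def recruited_units(x, top_k=32):
        acts = np.maximum(W.T @ x, 0.0)     # ReLU activations, shape (512,)
        return set(np.argsort(acts)[-top_k:])

    x1 = rng.normal(size=8)
    x2 = x1 + 0.1 * rng.normal(size=8)      # a similar input
    overlap = recruited_units(x1) & recruited_units(x2)
    print(len(overlap) / 32)                # similar inputs share most units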

This shouldn't be surprising - transformer networks are designed to learn the complex representations of the underlying causes that bring about things like human generated text, audio, and video.

If you modeled a star with a large transformer model, you would end up with semantic structures and representations that correlate to complex dynamic systems within the star. If you model slug cellular growth, you'll get structure and semantics corresponding to slug DNA. Transformers aren't the end-all solution - the paradigm is missing a level of abstraction that fully generalizes across all domains, but it's a really good way to elicit complex functions from sophisticated systems, and by contrasting the way in which those models fail against the way natural systems operate, we'll find better, more general methods and architectures, until we cross the threshold of fully general algorithms.

Biological brains are a computational substrate - we exist as brains in bone vats, connected to a wonderfully complex and sophisticated sensor suite and mobility platform that feeds electrically activated sensory streams into our brains, which get processed into a synthetic construct we experience as reality.

Part of the underlying basic functioning of our brains is each individual column performing the task of predicting which of any of the columns it's connected to will fire next. The better a column is at predicting, the better the brain gets at understanding the world, and biological brains are recursively granular across arbitrary degrees of abstraction.
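A toy version of that local objective, nothing like real cortical wiring, just the gist of "predict which connected unit fires next, and nudge the connection on the outcome":

    # Each unit scores its guesses about which connected unit fires
    # next; correct guesses increment the connection, misses decrement.
    UNITS = ["a", "b", "c"]
    score = {(i, j): 1.0 for i in UNITS for j in UNITS if i != j}

    def predict(current):
        return max((j for j in UNITS if j != current),
                   key=lambda j: score[(current, j)])

    sequence = "abcabcabc" * 50   # the "world" this mini-network observes
    hits = 0
    for prev, nxt in zip(sequence, sequence[1:]):
        guess = predict(prev)
        hits += guess == nxt
        score[(prev, guess)] += 1.0 if guess == nxt else -0.5
    print(hits / (len(sequence) - 1))   # approaches 1.0 on this toy data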

LLMs aren't inherently incapable of fully emulating human cognition, but the differences they exhibit are expensive. It's going to be far more efficient to modify the architecture, and this may diverge enough that whatever the solution ends up being, it won't reasonably be called an LLM. Or it might not, and there's some clever tweak to things that will push LLMs over the threshold.


As Randall Munroe pointed out in https://blog.xkcd.com/2010/05/03/color-survey-results/, almost nobody knows how to spell "fuchsia" correctly. I only remember it by the mnemonic that it's "fuck", but with an s.

Another post of his that isn't shadow banned!

I use example.com as my captive-portal URL to access public wifi - and noticed today it has some updated styling for the first time in a while. Someone at IANA did a little maintenance.

~2010: http://web.archive.org/web/20100407193039/http://example.com...

~2014: http://web.archive.org/web/20140430231310/http://example.com...

Today: http://web.archive.org/web/20251201020510/https://example.co...


feudal-capitalists

I kind of agree, but then the problem is not AI (humans can be stupid too), the problem is absolute power. Would you give absolute power to anyone? No. I find that this simplifies our discourse over AI a lot. Our issue is not with AI, it's with omnipotence: not its artificial nature, but how powerful it can become.

Isn't it wonderful how much fiction can teach us about reality by building scaffolds to stand on when examining it?

Why are you doing it that way? That's the hardest way to get content and most likely to infect you along the way. Just torrent stuff.

> For the last 15 years of his life Yamauchi lived quietly, refusing requests for interviews,

Bless

Did not owe the media anything.


Yeah, it's definitely a huge bubble right now. I think a lot of crypto bros have marketed it more as a casino than as a solution for decentralized currency.

A hypothetical good use case: a "science crypto" where the transaction fees go to funding science. By using the cryptos you want, you essentially pay taxes toward causes you think are important.
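A toy sketch of that fee-routing idea (the addresses and the 50/50 split are made up; no real chain semantics here):

    # Toy ledger where part of every transaction fee funds a cause.
    SCIENCE_FUND = "science-fund-address"   # hypothetical address

    def settle(sender, receiver, amount, fee, ledger):
        ledger[sender] = ledger.get(sender, 0) - amount - fee
        ledger[receiver] = ledger.get(receiver, 0) + amount
        # half the fee funds science, half pays whoever validates
        ledger[SCIENCE_FUND] = ledger.get(SCIENCE_FUND, 0) + fee / 2
        ledger["validator"] = ledger.get("validator", 0) + fee / 2

    ledger = {"alice": 100}
    settle("alice", "bob", 10, 1, ledger)
    print(ledger)  # alice: 89, bob: 10, science fund: 0.5, validator: 0.5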


Some services have the thumbs-down.


> So, my question to anyone in the Microsoft C-suite: have you ever tried to, like, actually use, like anything that you're selling?

Satya Nadella insists that Bing 365Pilot has supercharged his productivity, but whether he's high on his own supply or lying through his teeth is an exercise for the reader.

> Copilot consumes Nadella’s life outside the office as well. He likes podcasts, but instead of listening to them, he loads transcripts into the Copilot app on his iPhone so he can chat with the voice assistant about the content of an episode in the car on his commute to Redmond. At the office, he relies on Copilot to deliver summaries of messages he receives in Outlook and Teams and toggles among at least 10 custom agents from Copilot Studio. He views them as his AI chiefs of staff, delegating meeting prep, research and other tasks to the bots. “I’m an email typist,” Nadella jokes of his job, noting that Copilot is thankfully very good at triaging his messages.

https://www.bloomberg.com/news/features/2025-05-15/microsoft...

