[syndicated profile] daringfireball_feed

Posted by John Gruber

The New York Post (I’m not sure if I should tell you to take this with a grain of salt, because it’s the Post and their journalistic standards are low, or, to assign this extra credibility because it’s the Post, a right-wing Murdoch rag that Trump lackeys actually talk to):

President Trump is prioritizing taking control of the Strait of Hormuz as he grows frustrated with the lack of help from allies to force open the crucial waterway. And once Trump ends Iran’s reign of terror over the shipping route, he’s considering rechristening it the “Strait of America” or even naming it after himself, sources told The Post. [...]

Trump told a Saudi investor forum Friday evening in Miami that he might decide to call the Strait after himself, rather than America.

“They have to open up the Strait of Trump — I mean Hormuz,” Trump said. “Excuse me, I’m so sorry. Such a terrible mistake. The Fake News will say, ‘He accidentally said.’ No, there’s no accidents with me, not too many.”

I suspect there are going to be accidents soon, as he descends further into dementia and needs adult diapers.

Come at the king, you best not miss

Mar. 28th, 2026 01:16 pm
[syndicated profile] unsung_feed

Posted by Marcin Wichary

Column view cut its teeth on NeXT computers…

…and blossomed on early versions of Mac OS X…

…but where I thought it really shone was the first iPods:

This was perhaps the most fun you could ever have navigating a hierarchy of things; it made sense what left/right/up/down meant in this universe, to the point where you could easily build a mental model of what goes where, even if your viewport was smaller than ever.

It was also a close-to-ideal union of software and hardware, admirable in its simplicity and attention to detail. This is where Apple practiced momentum curves, haptics (via a tiny speaker, doing haptic-like clicks), and handling touch programmatically (only the first iPod had a physically rotating wheel, later replaced by stationary touch-sensitive surfaces) – all necessary to make iPhone’s eventual multi-touch so successful. And, iPhone embraced column views wholesale, for everything from the Music app (obvi), through Notes, to Settings.

Well, sometimes you don’t appreciate something until it’s taken away. Here are settings in the iOS version of Google Maps:

I am not sure why the designers chose to deviate from the standard, replacing a clear Y/X relationship with a more confusing Y/Z-that-looks-very-much-like-Y. They kept the chevrons hinting at the original orientation – and they probably had to, as vertical chevrons have a different connotation, but perhaps this was the warning sign right here not to change things.

I think the principle is, in general: if you’re reinventing something well-established, both your reasoning and your execution have to be really, really solid. I don’t think that happened here. (Other Google apps seem to use the standard column-view model.)

[syndicated profile] floggingbabel_feed

Posted by Michael Swanwick

Several-to-many years ago, an editor at Science Fiction World in Chengdu, China, asked me to write an essay explaining the New Wave to the magazine's audience. Since I loved the New Wave and had a rudimentary understanding of the Chinese publishing industry's view of what could and could not be published there, I was happy to oblige. 

The essay I wrote was never published. There are many reasons why this might have happened so I will not speculate. But, having run across it while reorganizing my files, I thought I would share it with you.

Oh! and, since SFW is not only a magazine but a publishing house, I was asked to suggest some New Wave books they might consider publishing. The list I provided will be posted here on Monday.

Here's the essay. This is its first publication ever:


The New Wave in a Nutshell: Inner Space, Sharks on Leashes, the Acid-Head Wars, Genius Jailbirds, a Pregnant King, Shattered Taboos, a Morose Telepath, Shocking Excess, Literary Success, the End of the World and Its Aftermath, and Long, Long Titles

by

Michael Swanwick


At the time it felt like a revolution. A literary revolution, that is, which is the best kind of revolution because nobody dies in it and only feelings get hurt. The New Wave lasted for a decade (from 1965 to 1975, give or take a few years), during which it was all anybody in science fiction talked about, argued over, or denounced. My first fan letter, after it was all over, asked if I thought there would be another New Wave anytime soon. There is no doubt that it changed science fiction forever.

But what exactly was the New Wave?

Tough question.

Science fiction writers had always had a difficult relationship with literary writers and critics who, as a rule, looked down on them. They responded by declaring the supremacy of adventure fiction, asserting a need for heroes and straightforward stories plainly written, and declaring that literary fiction was “boring.” But in the early Sixties, some genre writers felt that the literary establishment had a point – that science fiction could be a lot better than it was and that the way to improve it was by using the techniques of serious fiction. They were a varied group and not all of them got along well. But they shared a common ambition to write SF both better than and significantly differently from what had come before.

In 1964, a young writer became editor of the British science fiction magazine New Worlds. Michael Moorcock had a new vision of science fiction. It would be centered not on outer space but on “inner space.” It would be set in the near future and deal not with spaceships and robots but with the workings of the human psyche. Its protagonists would be regular people, not scientists and explorers. And it would be comfortable with experimental prose, dystopias, entropy, and a pessimistic view of the future. Luckily for Moorcock, someone writing exactly what he was looking for was at that moment just hitting his stride.

J. G. Ballard was a boy when the Japanese overran Shanghai and placed him and his parents in an internment camp for the duration of WWII. He had no illusions about human nature. Ballard’s early books were disaster novels, like The Crystal World wherein plants, animals, and even people are slowly turning to crystal. He also wrote surreal stories collected in Vermillion Sands, about a resort town in which elegant women walk genetically modified land sharks on leashes, boutiques sell living dresses, and artists use gliders to sculpt clouds. But his work grew increasingly involved in psychological space. His novels include Concrete Island, in which a man is marooned, like Robinson Crusoe, only on a traffic island, and the extremely controversial Crash, about a subculture of people who are sexually aroused by automobile accidents.

Almost as central to the movement was Brian Aldiss. His Greybeard takes the form of a quest novel. But it is set decades after a massive nuclear accident has sterilized everyone on Earth. In a world without children, there can be no purpose to the voyage that Greybeard and his wife make other than to find a quiet place in which to live out humanity’s last days. Aldiss’s most astonishing work, Barefoot in the Head, is set in the aftermath of the Acid-Head War, fought with psychochemical aerosols that still linger in the environment. Everybody in Europe is continually in a drug-altered state, a fact reflected by the novel’s prose. Into this madhouse comes a young savior, Charteris, with a new mode of thinking based on the philosophy of Gurdjieff. But as he gains followers, Charteris comes to realize that they’re all looking forward to his martyrdom. He must find an alternative or die.

Moorcock himself tackled a similar theme in Behold the Man. A religious fanatic travels in time to study at the feet of Jesus, only to find that there is no such person. Disillusionment drives him half-mad, and he finds himself assuming the role of Christ, even though he knows how it must inevitably end.

The New Worlds crowd included some American writers then living in England. John Sladek was a brilliant satirist in an age almost too absurd to satirize. (He wrote a “nonfiction” satire of New Age mysticism, Arachne Rising, asserting that there is a thirteenth constellation in the Zodiac whose existence has been hushed-up by scientists, only to see the gullible accept it as fact.) Self-replicating machines run out of control in Mechasm, threatening to destroy civilization. Unfortunately, the only man who can stop them is locked in his office cafeteria, crouched atop a table floating in a lake of bad coffee from a malfunctioning brewing machine. It gets stranger from there.

In Thomas Disch’s first novel, The Genocides, aliens convert the Earth to cropland and treat people as pests to be exterminated. It ends not with survivors building a new world but with the last humans dying. When outraged fans objected, he urbanely explained that having survivors would “destroy the purity of the thing.” In Disch’s masterpiece, Camp Concentration, a journalist discovers first that a totalitarian American government is injecting prisoners with a tailored disease that turns them into geniuses whose discoveries can be exploited before they die, and then that he himself has been infected. The gradual transformation of the hero from normal intelligence to near-superhuman status is a tour de force of modern fiction.

Ever the contrarian, defying the New Wave proclivity for pessimism, Disch gave his novel a happy ending.

So far, the New Wave was a British phenomenon. In 1968 Judith Merril transferred it to America via a much-discussed anthology of New Wave fiction titled England Swings SF. In the introduction, she wrote that what was happening in England was the most important development in all of science fiction. There were only two possible reactions to this. Those writers who wanted to write SF pretty much the way it had always been written resented being labeled Old Wave and hated this new thing. Everybody else was mad to be a part of it.

Where Moorcock was chiefly concerned with what science fiction was about, Merril cared more about how it was told. The best examples of how mainstream techniques could be imported into science fiction were John Brunner’s Stand on Zanzibar and The Sheep Look Up, both of which made the grim consequences of overpopulation bearable to read about by telling the story in collage form. The novels were told through the eyes of dozens of protagonists, with excerpts from books, newspaper articles, and the like scattered throughout. Thus the hero of these books was not a single individual but everybody. The collage technique was old news to the literary world but stunningly effective when applied to science fiction.

Almost simultaneously, writer Harlan Ellison assembled what is probably the single most famous original anthology in the history of the field, Dangerous Visions. Ellison’s New Wave was all about breaking taboos: religious, political, sexual, literary, what-have-you. Every story he bought was a taboo breaker. Some of these have aged badly. Theodore Sturgeon’s “If All Men Were Brothers, Would You Want One to Marry Your Sister?” (long titles were a commonplace of the era) was a rousing defense of incest. This looked bold and daring at the time but today seems simplistic and wrong-headed. But several of the stories were classics. Some won major awards. One of these was by Samuel R. Delany.

Delany’s influence on science fiction can hardly be exaggerated, in part because while literarily innovative, he didn’t give up on the traditional pleasures of science fiction. Babel-17 is a good example of this. It was an exploration of the (since disproved) Sapir-Whorf Hypothesis that language shapes human perception, with a poet-linguist-and-starship-captain named Rydra Wong, zero-gee battles, space pirates, and enough fresh new ideas to float the entire career of a lesser talent. It was colorful, exciting – and as sophisticated as anything appearing in the mainstream.

In his early years, Delany was often confused with Roger Zelazny, another writer who combined spaceships and adventures on alien planets with erudition and a flashy prose style. (Zelazny turned to SF after failing as a poet.) Lord of Light is set in a world based on Indian mythology and culture. Everyone is effectively immortal – reincarnation is a simple matter of going to a temple where a machine will place your consciousness in a new young body. Technology, however, is controlled by the crew of the ship that originally brought humanity to the planet, and they use it to pass themselves off as the Hindu gods. When the inevitable violent rebellion fails, who better to lead a peaceful revolution than the Buddha?

All the writers mentioned so far are male because at that time the field was overwhelmingly male. That was beginning to change. Two of the many women now entering science fiction, Joanna Russ and Ursula K. Le Guin, happened to be among the very best writers of their era. Both, unsurprisingly, were feminists. Joanna Russ’s debut novel, Picnic on Paradise, featured a heroine unlike any female protagonist previously seen in SF. In a galactic milieu filled with tall, beautiful, irresponsible people, Alyx is short, plain, tough, fierce, and competent. When war breaks out on a resort planet, she is assigned the task of rescuing a stranded group of tourists by guiding them through dangerous wilderness without using any modern tools, which would bring them to the attention of the warring factions. The greatest danger, however, comes not from the war but from the moral weakness of the tourists themselves.

Ursula K. Le Guin’s The Left Hand of Darkness begins with the sentence “The king was pregnant” and presents a world in which people are sexless save for a few days each month, when their bodies randomly turn either male or female. This allowed Le Guin to examine how much of our gender roles is biological and how much socially determined. It was an instant classic. In almost fifty years, it has never gone out of print.

Because of his essential strangeness, Philip K. Dick is always included among the New Wave writers, though there is little doubt he would have written exactly as he did if the movement had never existed. Over the course of dozens of novels, Dick obsessively examined the nature of reality as something other than what it appears to be. This, combined with some incautious statements in interviews, led to the impression that he was half-mad. Yet people who worked with him assure me that he was unfailingly rational. Unlike most writers, no single work stands out among his oeuvre. With Dick, you can start reading anywhere.

The last of the New Wave greats is Robert Silverberg, a man seemingly capable of writing well about anything. He received his greatest critical acclaim for Dying Inside. Its premise is simple. Selig has the extremely rare gift of reading minds. Yet, despite that, or possibly because of it, he has made almost nothing of his life. In middle age, he’s making a meager living writing term papers for college students. Then he discovers that his telepathic power is fading away. Alone and miserable, he has no choice but to come to terms with it. Telepathy has long been a power fantasy in science fiction. But Silverberg used it to create a meditation on the fact that everyone, no matter how powerful or insignificant – and Selig is both – must someday acknowledge their own mortality.

For a decade, exciting and innovative new works, like nothing ever seen before, appeared one after another, surprise upon surprise, on an almost monthly basis. It was a thrilling time to be a reader. Anything, it seemed, was possible.

Only it wasn’t.

Editors had long known that many New Wave authors did not sell well. But so long as the SF line as a whole made money, they were able to publish them anyway. Then came computers. It was now possible to track sales of every individual title. Overnight, it became obvious that conventional science fiction – the Old Wave – vastly outsold the New Wave. Word went down to cut the deadwood.

Some authors, such as R. A. Lafferty, the most original writer of his time, had to retreat to the small presses. Others quit writing. Yet others unenthusiastically switched back to the old stuff. At least one changed his name and wrote detective novels. British science fiction disappeared from American bookstores.

It felt like the end of the world.

In the aftermath, the conventional wisdom was that New Wave fiction was self-indulgent, plotless, and depressing. It’s true that there were excesses. Robert Silverberg’s time travel novel Up the Line featured almost non-stop sex. Brian Aldiss’s The Dark Light-Years was about trying to understand an alien species that communicated by defecating. A lot of short fiction by writers now long forgotten made no coherent sense at all. But it would be wrong to judge the New Wave by its worst examples.

If we judge the movement by its best, the New Wave was a tremendous success. Well before he died, J. G. Ballard was recognized by the literary establishment as one of Britain’s foremost writers. Stand on Zanzibar was a best-seller. Roger Zelazny’s work remained immensely popular. So did that of Delany and Le Guin, who are now darlings of Academia; the number of papers written about their work is legion. Silverberg was coaxed out of retirement by the largest advance ever offered a science fiction writer and wrote the immensely successful Lord Valentine’s Castle.

More importantly, the candle flame of literary ambition may have flickered but it never died. New writers were coming along, like James Tiptree, Jr., whose stories of biological determinism and alien colonialism were first collected in Ten Thousand Light Years from Home, and Gene Wolfe, whose The Fifth Head of Cerberus can equally well be considered the last major work of the New Wave or the first of what came after. None of the new writers thought that SF and serious literature were two separate things. Nobody could tell them that science fiction couldn’t be about serious subjects or told in a literary way.

The New Wave had proved that wasn’t true.

When I responded to that fan letter asking if there would ever be a new New Wave, I said no. It simply wasn’t needed. And time has proved me right. What I didn’t know, however, was that Cyberpunk was about to happen and that for close to a decade, it would be all that anybody in science fiction talked about, argued over, or denounced.

But that’s another story, for another day.

 

Above: Marianne bought this carry-on bag for me in Canada. It's made from Italian leather and the Chinese flag was one of several they offered. I chose China because I'd never been there and hoped someday to visit. And I have! Several times. Our global interdependence can, on occasion, be a good thing. 

*

GLES 1.x transparency

Mar. 28th, 2026 05:48 am
[syndicated profile] jwz_org_feed

Posted by jwz

Dear Lazyweb, why doesn't alpha blending work when lighting is enabled on Android? Transparency works with glColor but not with glMaterial.

GL_VERSION in the Android simulator is "OpenGL ES-CM 1.1 (4.1 Metal - 88.1)".

This works fine on iOS and Cocoa, so it's not strictly a GLES thing, just Android. GLSL is not involved.

Test case:

  Bool lights_p = time(0) & 1;

  glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  glEnable (GL_BLEND);
  glDisable (GL_COLOR_MATERIAL);
  glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

  #define glColor4fv(v) glColor4f (v[0], v[1], v[2], v[3])

  GLfloat c1[] = { 1, 0, 0, 0.5 };
  GLfloat c2[] = { 0, 1, 0, 0.5 };
  GLfloat v[] = { 1, 0, 0,
                  1, 1, 0,
                  0, 1, 0,
                  0, 0, 0,
                  1, 0, 0,
                  0, 1, 0, };
  glVertexPointer (3, GL_FLOAT, 0, v);
  glEnableClientState (GL_VERTEX_ARRAY);

  if (lights_p)
    {
      GLfloat amb[] = {0.5, 0.5, 0.5, 1};
      glLightfv (GL_LIGHT0, GL_AMBIENT, amb);
      glEnable (GL_LIGHTING);
      glEnable (GL_LIGHT0);
      glColor3f (0, 0, 0);
    }
  else
    {
      glDisable (GL_LIGHTING);
      glDisable (GL_LIGHT0);
    }

  if (lights_p)
    glMaterialfv (GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, c1);
  else
    glColor4fv (c1);
  glDrawArrays (GL_TRIANGLES, 0, 6);

  glPushMatrix();
  glTranslatef (0.5, 0.25, 0);
  if (lights_p)
    glMaterialfv (GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, c2);
  else
    glColor4fv (c2);
  glDrawArrays (GL_TRIANGLES, 0, 6);
  glPopMatrix();

  glDisableClientState (GL_VERTEX_ARRAY);
  glVertexPointer (3, GL_FLOAT, 0, 0);

(no subject)

Mar. 27th, 2026 09:37 pm
shadowkat: (Wonder Woman)
[personal profile] shadowkat
Another day, another dollar - or several dollars - hence the reason I got up at 6 am, got on the subway around 7 am, and lugged my sorry old ass to the tip of Manhattan and the eighteenth floor of a steel and glass building to work. My thirty-odd years in NYC have resulted in jumping between all sorts of office buildings in just about every borough but Staten Island. (Which is a good thing, because I'm not entirely sure how I'd commute to Staten Island from where I live?) I finally made it into an office with a window and a view, and some semblance of privacy - it's still a cubicle, but at least it's a nice one.

Political Interactions on Threads or social media (that is not Dreamwidth), which is why I'm rarely on Threads? It makes me wish there were a lot more Darwin Awards.

Shower. Bed. I'll write more another day, hopefully not about politics.
[syndicated profile] daringfireball_feed

Posted by John Gruber

Oliver Darcy, reporting for Status (paywalled, alas):

According to the data obtained by Status, BI ended 2023 with roughly 160,000 paid subscribers, a drop of about 14 percent from the prior year when it boasted about 185,000 subscribers. The slide did not stop there, however. In 2024, it closed the year with roughly 150,000 subscribers, a further six percent decline. And in 2025, the number fell again, to about 135,000 paid subscribers — another 10 percent drop.

All told, over roughly three years, BI saw its subscription base plummet by about 50,000, or a jarring 27 percent.

Not the sort of momentum you want.

[syndicated profile] daringfireball_feed

Posted by John Gruber

Lorenzo Franceschi-Bicchierai, reporting for TechCrunch:

Almost four years after launching a security feature called Lockdown Mode, Apple says it has yet to see a case where someone’s device was hacked with these additional security protections switched on.

“We are not aware of any successful mercenary spyware attacks against a Lockdown Mode-enabled Apple device,” Apple spokesperson Sarah O’Rourke told TechCrunch on Friday.

[syndicated profile] daringfireball_feed

Posted by John Gruber

Apple Newsroom:

Beginning this summer in the U.S. and Canada, businesses will have a new way to be discovered by using Apple Business to create ads on Maps. Ads on Maps will appear when users search in Maps, and can appear at the top of a user’s search results based on relevance, as well as at the top of a new Suggested Places experience in Maps, which will display recommendations based on what’s trending nearby, the user’s recent searches, and more. Ads will be clearly marked to ensure transparency for Maps users.

Ads on Maps builds on Apple’s broader privacy-first approach to advertising, and maintains the same privacy protections Maps users enjoy today. A user’s location and the ads they see and interact with in Maps are not associated with a user’s Apple Account. Personal data stays on a user’s device, is not collected or stored by Apple, and is not shared with third parties.

The privacy angle is good. I don’t want to take that for granted, because few, if any, of Apple’s $1-trillion-plus market cap peers have such devotion to user privacy.

But more and more it’s becoming clear that while Apple’s devotion to protecting user privacy remains as high as ever, their devotion to delivering the best possible user experience does not. Here’s Apple’s own screenshot showing what these ads are supposedly going to look like. It looks fine. But these ads seem highly unlikely to make the overall experience of using Apple Maps better. Perhaps, in practice, they will not make the experience worse, and it’ll be a wash. But I can’t help but suspect that they’re going to make the experience worse, and the question is really just how much worse. The addition of ads to the App Store has made the experience worse.

We shall see. I’m not going to prejudge the actual experience, and you shouldn’t either. I also do not begrudge Apple for wanting to monetize Maps. But if the addition of ads does make the Apple Maps experience worse, why won’t Apple let us buy our way out of seeing them? Netflix doesn’t force us to watch their ads. YouTube Premium is arguably the best bang-for-the-buck in the entire world of content subscriptions. Why should Apple One subscribers still see these ads in Apple Maps?

Are you a mod or a rocker?

Mar. 27th, 2026 11:07 pm
[syndicated profile] dr_drang_feed

Posted by Dr. Drang

[Equations in this post may not look right (or appear at all) in your RSS reader. Go to the original article to see them rendered properly.]


I’ve been working my way slowly through Reingold and Dershowitz’s Calendrical Calculations. This week I hit Chapter 11 on the Mayan and Aztec calendars and came across a notation for modulo arithmetic that wasn’t familiar to me.1 I figured I’d write about it here on the off-chance that any of you would find it interesting. Also, to make it stick in my head a little better.

The notation came up in the section on the Haab calendar, a sort of solar calendar that the Mayans used along with the Tzolk’in and Long Count calendars. The Haab calendar has 18 months of 20 days each and then a 19th sort-of-month with just 5 days. There’s no year number in the Haab calendar, so there’s no way to convert directly from the Haab calendar to other calendars. But there is a way to get a date in another calendar that’s on or nearest before a given Haab date.

Reingold and Dershowitz use a “fixed” or “RD” calendar as their way station between all the calendars. It’s a single day number that counts up from what would have been January 1 of the Year 1 in the Gregorian calendar if the Gregorian calendar had existed back then. In this system, 0001-01-01 is Day 1 and today, 2026-03-27, is Day 739,702.
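As a quick sanity check, Python’s `datetime.date.toordinal` happens to use this same numbering – it returns the proleptic Gregorian ordinal, with 0001-01-01 as day 1 – so it can stand in for the fixed/RD day number directly:

```python
from datetime import date

# Python's proleptic-Gregorian ordinal shares R&D's epoch:
# 0001-01-01 is day 1 in both systems.
print(date(1, 1, 1).toordinal())      # 1
print(date(2026, 3, 27).toordinal())  # 739702
```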

The function that finds the closest given Haab date on or before a given fixed date is called mayan-haab-on-or-before, and it’s defined this way in the text:

Mayan Haab date function

What’s odd about the modulo notation in this definition is that the thing after “mod” isn’t a divisor, it’s an interval: the half-open interval between date (inclusive) and date – 365 (exclusive).

Here’s how R&D define this interval modulus, both in Chapter 1 of Calendrical Calculations and in this ACM paper:

$$n \bmod [a .. b) \;\overset{\text{def}}{=}\; \begin{cases} a + (n - a) \bmod (b - a) & \text{if } a \neq b \\ n & \text{if } a = b \end{cases}$$

As long as the two ends of the interval aren’t identical, the answer will lie in the range [a..b). This notation is helpful in shifted modulo operations like the one in mayan-haab-on-or-before because it explicitly tells you the range of answers you’ll get. The idea is that the resulting fixed date will be anywhere from 0 (inclusive) to 365 (exclusive) days before the given date.
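The definition translates directly into a few lines of Python. This is just a sketch, and `interval_mod` is my name for it, not R&D’s:

```python
def interval_mod(n, a, b):
    """R&D's interval modulus, n mod [a..b): the result lies in the
    half-open interval from a (inclusive) to b (exclusive), except in
    the degenerate case a == b, where the result is simply n."""
    return n if a == b else a + (n - a) % (b - a)

# A backward interval, as in mayan-haab-on-or-before: the result
# lands somewhere in the 365 days ending at (and including) day 1000.
print(interval_mod(250, 1000, 1000 - 365))  # 980
```

Note that this leans on Python’s `%` being floored; a language with a truncated `%` would need a wrapper to reproduce the backward-interval case.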

(The normal modulo notation, $n \bmod m$, could be written as $n \bmod [0 .. m)$, although this doesn’t seem particularly helpful.)

Note that in mayan-haab-on-or-before, the interval goes backward, which means the divisor in the standard mod function is a negative number: –365. If you’re implementing this function in a programming language, you have to make sure that using a negative divisor in your language’s mod will give you a negative answer. This means that mod must have a floored definition. The mod function in Lisp, which R&D are using, and the % operator in Python, which I’m using as I reimplement R&D, both use the floored definition.
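A quick check of the sign behavior in Python (C’s `%`, by contrast, truncates toward zero and would give a different answer for the first case):

```python
# Floored modulo: the result takes the sign of the divisor,
# which is what the backward interval above relies on.
print(7 % -365)   # -358
print(-7 % 365)   # 358
```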

I mentioned earlier that the 19th month of the Haab calendar is an oddball because it has only 5 days. As it happens, today is smack in the middle of that 19th month, which is called Wayeb or Uayeb, depending on whose transliteration you use.

Another odd thing about the Haab calendar—something that computer programmers must love—is that the day numbers within a month start at 0, not 1. So Monday, which is the start of the next Haab cycle, will be 0 Pop, Pop being the name of the first Haab month.


  1. It would have been familiar if I’d read Chapter 1 carefully instead of skimming, but I was eager to get past the preliminaries quickly and figured I could always go back to Chapter 1 if necessary. Which it was. 

Netflix Raises Prices Again

Mar. 27th, 2026 11:43 pm
[syndicated profile] daringfireball_feed

Posted by John Gruber

Todd Spangler, Variety:

Under the new pricing, effective March 26 for new users and rolling out to current customers depending on their billing cycle, Netflix’s Standard plan (which has no ads and provides streaming on two devices simultaneously) is rising by $2, from $17.99 to $19.99/month. The ad-supported plan is going up a buck, from $7.99 to $8.99/month, and the top-tier Premium plan (no ads, streaming on up to four devices at once, Ultra HD and HDR) is increasing from $24.99 to $26.99/month.

I pay the full $27/month because I’d rather cancel Netflix than watch ads, and I suspect I’d notice the difference between 4K and 1080p. But also because money runs through my fingers like water.

[syndicated profile] unsung_feed

Posted by Marcin Wichary

An excellent 17-minute video from The Art Of Storytelling that analyzes the now-infamous 2021 Mark Zuckerberg Metaverse introduction video:

What I liked about it is that the author goes beyond cheap shots and deeper into both storytelling aspects (drawing from his experience)…

Now, as you can tell, the big problem with the design and execution of this video is that the producers failed to recognize the importance of point of view in telling this story. Now, perspective is already very important in any film, but it’s doubly important in a film for which one’s point of view in reality is also the subject. But this failure is present even in some of the more mundane parts of the film like the interviews that Mark does with various meta staff members. Now, as it’s plain to see, these are not real interviews. They’re fully scripted and staged – again, a classic mistake in corporate film. You can even tell that they’re not looking at each other. They’re clearly reading from a teleprompter. Yikes.

Of course, the entire premise of an interview is that two people are speaking candidly. So watching an obviously fake interview can be deeply unsettling as the speakers try to act out natural conversation and inevitably fail. This is why so many people in this video, including Mark, seem to not know what to do with their hands while speaking. It’s because they’ve been told to act naturally in a social situation that does not normally exist.

…and the meaning of these kinds of propaganda-esque announcements:

They are joined by some friends who are calling from Soho to tell them about some cool augmented reality street art that they’ve just discovered. […] And with a wave of his hand, Mark teleports the artwork into his spaceship so that he can appreciate it for himself, thus extracting this street art from any sense of place and context, which is the point of street art. I know this might sound like a nitpick, but I think it’s just worth lingering on the fact that, you know, in this high concept tech demo about how this technology will empower people to appreciate art in new ways. Nobody paused to ask what the social and cultural function of street art actually is.

The entire introduction video comes across as thoughtless and careless – “It’s not a product launch or even a demo. It’s just a cartoon about the world Mark Zuckerberg is telling you that you will one day live in.” – and some of the observations here will be relevant to other things, even in other mediums: UI redesign minisites, font announcement articles, rebrand unveils, and so on.

I would love similar analyses of Apple’s stuff – not just the most obvious parallel which would be the 1987 Knowledge Navigator vision video, but some of the more recent scripted virtual keynotes, too.

★ Apple Giveth, Apple Taketh Away

Mar. 27th, 2026 08:46 pm
[syndicated profile] daringfireball_feed

Posted by John Gruber

The Good News First

Just this week I wrote about a hidden defaults preference you can set to turn off most of the insipid menu item icons in most of Apple’s first-party apps in MacOS 26 Tahoe. I bemoaned the fact that Safari — generally an exemplar of what makes a great Mac app a great Mac app — largely ignored this setting, leaving most of its menu item icons in place. I am delighted to report that that’s fixed in MacOS 26.4. With the preference set to hide these icons, Safari now only shows a handful.

Here’s a link to the screenshot of the old before/after, taken on MacOS 26.3.2. Boo hiss. Here’s the new before/after, taken on MacOS 26.4:

Screenshot of Safari's File menu on MacOS 26.3 Tahoe, before and after changing the hidden `NSMenuEnableActionImages` preference. In the before screenshot, every menu item has an icon. In the after image, the only items with an icon are New Empty Tab Group, New Tab Group with 2 Tabs, Delete Tab Group, Add to Dock…, and Import From Browser.

In Tahoe 26.3 (and presumably, earlier versions of Tahoe), 16 of 19 menu items in Safari’s File menu still showed an icon with this setting enabled. In 26.4, only 5 of 19 do.1 Safari’s other menus have been updated similarly, and look so much better for it.

It’s interesting to me that Safari was updated to support this hidden preference in 26.4. I take it as a sign that there’s a contingent within Apple (or at least within the Safari team) that dislikes these menu item icons enough to notice that Safari wasn’t previously recognizing this preference setting. (And I further take it as a sign that within Apple’s engineering ranks, the existence of this defaults setting is widely known.) Keep hope alive.
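For reference, hidden preferences like this are set from the Terminal with `defaults`. A sketch, assuming the `NSMenuEnableActionImages` key named in the screenshot caption applies in the global domain (you’ll need to relaunch apps for it to take effect):

```shell
# Hide most menu-item icons in Apple's first-party apps
# (assumes the NSMenuEnableActionImages key works globally with -g).
defaults write -g NSMenuEnableActionImages -bool NO

# To undo, delete the override and fall back to the default behavior:
defaults delete -g NSMenuEnableActionImages
```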

Now the Bad News

Another recent Tahoe-related tip I’ve been writing about was using a device management profile to block the prompts in System Settings → General → Software Update to “upgrade” from MacOS 15 Sequoia to 26 Tahoe. I first wrote about it a month ago, linking to a post from Rob Griffiths. I then wrote about it again, just this week, linking to a YouTube video from Mr. Macintosh.

Ever since this technique started making the rounds, there was widespread commentary that it was taking advantage of a bug, not a feature, in MacOS 15 Sequoia. The 90-day “deferral” period to block the Tahoe update prompts was supposed to be from the date of the Tahoe major release (26.0), not from the most recent minor release. Welp, with this week’s release of MacOS 15.7.5, this bug is fixed, and Tahoe shows up in the Software Update panel in System Settings even if you have one of these device management profiles installed. Alas.

All is not lost, however. The same video from Mr. Macintosh shows a second, slightly less elegant way to banish all signs of Tahoe in Software Update (just after the 9:00 mark). The trick is to register your Mac for the MacOS Sequoia Public Beta updates (or the developer betas). This blocks all signs of Tahoe. You don’t actually have to install any future betas of Sequoia (at the moment, there are none available). Just make sure you have Automatic Updates disabled too. I’d rather risk inadvertently installing a public beta of 15.8 Sequoia than inadvertently “upgrading” to Tahoe.


  1. In my article earlier this week, my screenshots showed only 18 menu items in Safari’s File menu, not 19. That’s because I took those screenshots on my review unit MacBook Neo, which I’m running in near-default state. Safari’s File → Import From Browser submenu appears in the File menu if and only if you have certain third-party web browsers installed on your system. On my MacBook Neo review unit, I don’t have any third-party browsers installed, so Safari omits this menu item. I snapped today’s screenshots from a different Tahoe machine that has Firefox installed. ↩︎

Got your back, pt. 4

Mar. 27th, 2026 07:20 pm
[syndicated profile] unsung_feed

Posted by Marcin Wichary

Connecting to public wi-fi networks with their captive portals is always a bit of a wonky proposition, and nothing makes public wi-fi wonkier than using it on a plane.

I believe that the rise of HTTPS made things harder – if the captive portal doesn’t kick in, no secure traffic can happen – and over time I just started remembering that “captive.apple.com” is a reliable HTTP-only destination to visit.

But I noticed this week that United’s onboard wi-fi network is called “Unitedwifi.com” as a reminder of where to go once you are connected, to avoid that problem. I thought this was a nice touch.
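The probe trick described above can be sketched in a few lines: request a page known to be served over plain HTTP with a fixed body, and see whether a captive portal has swapped in its own login page. The function names here are mine; the probe URL and its “Success” body are Apple’s well-known connectivity-check endpoint.

```python
# Detect a captive portal by probing a known plain-HTTP page.
import urllib.request

PROBE_URL = "http://captive.apple.com"

def portal_intercepted(body: str) -> bool:
    """True if the probe body looks like a portal's login page,
    rather than Apple's expected "Success" response."""
    return "Success" not in body

def behind_captive_portal(timeout: float = 5.0) -> bool:
    """Fetch the probe URL and report whether a portal intercepted it."""
    with urllib.request.urlopen(PROBE_URL, timeout=timeout) as resp:
        return portal_intercepted(resp.read().decode("utf-8", "replace"))
```

If a portal is in the way, the plain-HTTP request gets its login page back instead of the tiny “Success” document, which is exactly what triggers the portal UI to appear.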

[syndicated profile] deusexmachinatio_feed

Posted by Andrea Phillips

We’ve been trained for decades to believe that computers are always right. That computers are not capable of making mistakes. If there’s a mistake with a computer involved, it’s always the result of some human action: someone typed the wrong number, someone clicked the wrong button, someone used the wrong file. 

We are used to computers always doing exactly what we told them to do. (It’s just that sometimes, what we told them to do and what we THOUGHT we told them to do aren’t the same thing.) Somewhere, upstream, if the computer is wrong, it’s your fault.

So it’s not surprising, in a grand sociological way, that we’re struggling with the onset of a computer-based tool in which making mistakes isn’t just a one-time thing; it’s a mathematical inevitability based on how these tools work. OpenAI has said so itself.

You may or may not have heard the word “hallucinations” in the context of AI. A hallucination is when an LLM makes stuff up that isn’t true. But this isn’t the result of something going wrong somewhere in the circuitry. This is how the AI does everything — again, we’re generating sequences of words based on how statistically likely they are to appear close to each other and in which order. The LLM processes your prompt, and sometimes it will be right, and sometimes it won’t be, and that’s the gamble you’re taking. It’s just not as obvious as something simpler doing the same thing, like, say, a Magic 8-Ball.
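The statistical move described above can be illustrated with a toy bigram model: count which word tends to follow which, then sample accordingly. This is a drastic simplification of how an LLM actually works, offered only to make the point concrete; the corpus and names are invented.

```python
# Toy bigram "language model": generate plausible-looking word sequences
# purely from co-occurrence counts, with no notion of truth.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# follows["the"] ends up as Counter({"cat": 2, "mat": 1, "fish": 1})
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word, rng):
    counts = follows[word]
    if not counts:  # dead end (the corpus's final word): restart
        return "the"
    words, weights = zip(*counts.items())
    return rng.choices(words, weights=weights)[0]

# Generate a short sequence. Nothing checks whether it is true;
# it is only ever statistically likely.
rng = random.Random(0)
out = ["the"]
for _ in range(5):
    out.append(next_word(out[-1], rng))
print(" ".join(out))
```

Whether this toy prints “the cat sat on the mat” or “the cat ate the mat” depends only on the dice, which is the essay’s point in miniature: the output is plausible by construction, not correct by construction.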

An LLM is a marvel of engineering; it’s a miracle that it works as well as it does. Truly a triumph of technology. It’s really very good! But it’s not good enough, because it is not thinking.

Every single thing an LLM tells you is something it just kind of made up from nothing; it’s just that it has an enormous body of plausible things to tell you.

But we’re so used to reflexively trusting what the screen tells us. It has access to all of human knowledge, right? And look, most of the time, it’s pretty good, right?

Pretty Good Isn’t Good Enough

Would you use a lawyer who just makes up case citations? No? What if it’s only half the cases? What if it’s only one in four? One in a hundred? If you know that sometimes a lawyer is going to make stuff up and put it in your filings, would you ever use that lawyer at all?

Do you think this is hyperbolic or just hypothetical? Well. Here’s a lawyer being sanctioned for filing an AI-assisted brief with false citations in New York in 2023. A government lawyer in Texas in 2024. Oregon in 2025. Here’s one from 2026.

You know what, just look over the headlines yourself. Consistently, for years now, lawyers have been using AI to do their work, and the AI has made shit up, and then there’s been some trouble. 

Do you consider that an acceptable risk for your legal proceedings?

Okay, forget lawyering. Would you use someone to listen to recordings and write transcriptions of them for you if it made stuff up to fill in pauses in a conversation or in a sentence? How about if that stuff was super violent and racist?

Whisper, an OpenAI product widely used as medical transcription software, will hallucinate racist and violent content and imply drug use where no such words were spoken — definitely not the sort of thing you want going into a medical record! From the study: hallucinated content appears in about 1% of transcriptions, and 38% of those hallucinations are what the study considers “harmful.”

An average primary care doctor sees about 20 patients a day, so that’s about one hallucination a week, and one or two a month that are harmful.
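That arithmetic checks out. A quick sketch, assuming 5 working days a week and one transcript per visit (my assumptions); the rate figures are the study’s:

```python
# Back-of-the-envelope check of the hallucination figures quoted above.
transcripts_per_week = 20 * 5      # 20 patients/day x 5 days
hallucination_rate = 0.01          # ~1% of transcriptions (per the study)
harmful_share = 0.38               # 38% of hallucinations (per the study)

hallucinations_per_week = transcripts_per_week * hallucination_rate
harmful_per_month = hallucinations_per_week * 4.33 * harmful_share

print(hallucinations_per_week)       # 1.0 -- about one a week
print(round(harmful_per_month, 1))   # 1.6 -- one or two a month
```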

Would you hire a human being with the knowledge that they would EVER randomly insert a little fanfic of the patient threatening to murder the doctor? Even just once a month? 

Humans will make mistakes, too. But the mistakes a human being will make are orders of magnitude less severe. That’s one of the reasons that AI hallucinations throw us for a loop: not just that they’re wrong, but that they’re wrong in ways that a human could never be.

AI tools keep being pushed into high-stakes situations where judgement and common sense matter. But they have neither of these things.

AI tools have brought down AWS at least twice. That we know of. AWS — that’s Amazon Web Services — is the service that powers the backbone of the modern internet as we know it, so in a sense, if AWS goes down, so does most of the internet. 

I personally think if these AI systems were human, their asses would have been fired already, if they’d even gotten past the “checking your references” part of the hiring process. And yet there seems to be a push to tolerate poor performance as a necessary evil on the way to… something?

It is perpetually shocking to me how dedicated companies and individuals are to continuing use of AI systems that have fucked up in astonishing and inhuman ways. 

LLMs have given us a Chicago Sun-Times summer reading list full of books that don’t actually exist, security AIs have mistaken Doritos and a clarinet for guns, LLMs have deleted all of someone’s email, or all of their hard drive, or over two years of their company’s work. People using a chatbot as a companion for emotional support have been encouraged to commit suicide. (Actually, the Wikipedia page on deaths caused by chatbots is extremely disturbing.)

These are just the ones that make the news. Imagine all the times that didn’t happen to catch a reporter’s eye.

One of my all-time favorites: Microsoft Excel has added an AI tool and warned you not to use it “for any task that requires accuracy or reproducibility.” Accuracy. And. Reproducibility.

You know, the core thing we expect computers to always do.

I lie awake at night sometimes worrying about the fact that someone out there is probably trying to get AI into the software we use for situations where any level of error is unacceptable, like banking. Or air traffic control.

Why Does AI Make These Mistakes?

A chatbot comes off like an amiable, helpful, thoughtful person who is here to make your life better. But here are some things AI is not doing when it is generating an answer for you:

  • Performing arithmetic

  • Checking reference material 

  • Consulting a doctor

  • Assessing statements for truth

  • Judging whether it needs information it doesn’t have

It’s ironic to me that some people think their generative AI is an actual entity with consciousness who understands what they’re talking about. Sometimes they’ll cite conversations with a chatbot where the bot tells them its thoughts and feelings! Where it expresses needs and desires!

I mean, of course the system knows how to have a conversation as if it were a sentient intelligence. There’s an enormous body of fiction featuring exactly this thing that goes back decades. We’ve been imagining it for far longer than we’ve been able to do it.

But whether the bot is a conscious entity is frankly not even relevant to anything but philosophic questions right now. Because if it is conscious, it isn’t conscious in a way that we understand and it isn’t using language in the same way that we do.

Regrettably I can’t remember the source of this analogy, but — imagine that you’ve been locked into a library written entirely in Thai. (This is assuming you don’t read or speak Thai, but if you do, maybe choose Zaghawa.) There are no pictures in these books, no diagrams, and no translations into any language that you do know.

Given enough time alone in this library, and with no other resources, will you be able to teach yourself Thai?

This is what the AI has access to: words, millions and millions of words and sentences. But the AI has never seen the sun, it does not know what being tired means, and it certainly hasn’t gone to medical school.

I’m willing to entertain the idea that an AI is conscious, inasmuch as I am willing to entertain that any changing knot of matter and energy may have some form of consciousness. But whatever is going on in there is not the same as us, and it’s a catastrophic error to treat it as if it were.

The AI does not have access to external reality. The AI does not understand what you’re asking it to do. The AI most certainly isn’t going to know things that don’t exist outside of the body of words it’s been trained on.

If you ask an AI for an essay about Thomas Jefferson, it’s probably going to do a bang-up job. There’s a lot of information out there about Thomas Jefferson to draw from. If you ask it how to write code to do a specific task, it’s got good odds of giving you advice (but you’re going to need to know enough on your own to check behind it, like it’s a shitty junior developer).

But if you ask it why your spouse is mad at you, or how many fig trees per square mile there are in Wayne County, Michigan, or if school will be closed for snow next week, or the best hotels for your road trip, or what your blood test results mean, sure, it’s going to give you an answer… but it doesn’t really KNOW.

[syndicated profile] kottke_org_feed

Posted by Jason Kottke

Test footage from a slime simulator game made by former Epic Games employee Asher Zhu. You try to stay hydrated in the hot Tokyo summer by showering and drinking beverages from vending machines.
