Saturday, April 19, 2025

Taming the News Cycle: An AI Experiment

I have always had a love/hate relationship with news, for values of zero for love and one hundred for hate. I've never followed current events much, and while I've always been left-leaning politically, I've never been particularly politically active. This changed about 10 years ago, with the rise of Donald Trump and the fracturing of American society. I got the NYT app on my phone and started looking at it multiple times per day, driven by what I now know to call FOMO. (Coincidentally, I just recently learned that acronym from a crossword puzzle.)

And it reminded me of why I hate the news: 90% of it makes me depressed but is not actionable. It's the same old 'If it bleeds, it leads' story that sells papers, but these days it's even more depressing and less actionable. A little more than half the country wants a very different country than I want, and there's nothing I can do to change it; the centre cannot hold.

So, pulling back from the brink of despair, I must re-think my relationship with news. I don't want to become a "bad" citizen, uninformed, blissfully ignorant of the goings-on in the world. But I also need to preserve my emotional energy. Pull back. Stop news looping.

There are news aggregator sites that let you express your interests and then serve up news tailored to your preferences, but that's not what I want. I don't want to be in an echo chamber that only reinforces my outlook. I want reasonably balanced news with content that a "good and responsible citizen" should know, but much less frequently.

SOLUTION: AI NEWS CURATOR

So I'm conducting an experiment: I'm making Claude.ai into my news curator. It now has web search capabilities and can provide summaries. It can create "artifacts", which are basically files attached to a chat session containing generated output. You can provide project-level instructions that tell Claude what you want it to do. And it can use reasoning and inference to make judgement calls about how "important" something is.

I've created a news "project" with the following project-level instructions to Claude (for the programmers out there, a sketch of how this could be scripted follows the rules):

This project is for me to keep up with important news. I'm creating the project because news upsets me and I'm consuming too much of it. I want to avoid as much "unnecessary" news as possible. You are going to be my news curator.

One thing I want to avoid is the echo chamber effect. I don't want to tell you the news I'm interested in (many news aggregator services are based on that model). Quite the contrary - in my perfect world I won't hear ANY news. But that's not responsible. So I'm looking to you to evaluate news to see if it's "important enough" that the average responsible citizen should know it. It's like medicine - I don't like it, but it's good for me.

This means you need to cast a wide net. I don't want "one important story from each of three categories." I want all important stories from all categories.

PROCEDURE

Each Monday I will create a new chat session. I will prompt you, and you will do two things:

1. Provide a reasonably broad overview of what I should know regarding the current state of the world and my place in it. This goes into a date-stamped artifact that I won't look at.

2. Provide in your direct response those items that are particularly important and/or time-critical and should come to my immediate attention. Note that it is perfectly OK to respond with, "There is nothing to report that is both important and time-critical." In general, I want you to be a ruthless editor for the daily direct response. Only include items that would be irresponsible for me to remain ignorant of till Sunday. And please omit any closing summary of the day's findings that didn't meet the "important and time-critical" threshold. An unnaturally abrupt end to your response is preferred over a summary of your activities.

Each subsequent day, Tuesday through Saturday, I will re-use the same chat session and you will do the same thing, using the existing artifacts to avoid repeating yourself, but still restricting your direct response to those things that I need to know in "real time".

Sunday morning will be different. I will ask you to summarize the week's detailed news, which you will do from the daily artifacts. I can also read the individual artifact files to get more detail.

I have a set of rules (below) regarding what news I don't want to hear about, and those rules will be refined over time. The rules follow a common theme: I want very little news that upsets me but that I can't do anything about (i.e. is non-actionable). I know that as responsible citizens we should be well-informed, but I need to protect my emotional state. So you should only include a non-actionable, upsetting news item when, in your judgement, it would be socially irresponsible for me to remain blissfully ignorant of it. In other words, I will be relying on your judgement to violate the exclusion rules below when, in your opinion, it is important for a responsible citizen to know about something.

EXCLUSION RULES

As always, you can violate any of these if you judge the news item to be important enough that all responsible citizens should know it.

1. Omit items from entertainment news.

2. Omit items from science news (I get that from a different source).

3. Omit items from international news related to foreign relations. For example, I don't need to know about trade wars.

4. Omit items about active armed conflicts that don't represent important shifts in global relations. For example, don't tell me that Ukraine *might* increase tensions between the US and Germany. Do tell me if somebody joins or drops out of NATO.

5. Don't tell me about shifts in the US economy. Most of those shouldn't be acted on anyway.
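
(An aside for fellow programmers: I'm doing all of this in the Claude.ai app, but here's a rough sketch of how the same Monday-through-Saturday loop might be scripted against the Anthropic Messages API. The model alias, the web search tool block, and the file layout are my assumptions for illustration, not part of my actual setup; the project-level instructions above would go in the system prompt.)

import datetime
import pathlib
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The project-level instructions above, saved to a local file (hypothetical name).
INSTRUCTIONS = pathlib.Path("news_curator_instructions.txt").read_text()

def daily_run(history: list) -> list:
    """One day's news run; `history` is the week's running chat session."""
    today = datetime.date.today().isoformat()
    history.append({"role": "user",
                    "content": f"Today is {today}. Do today's news run."})
    response = client.messages.create(
        model="claude-3-7-sonnet-latest",  # example alias; use whatever model you like
        max_tokens=4000,
        system=INSTRUCTIONS,
        # Assumes the server-side web search tool is enabled for your account.
        tools=[{"type": "web_search_20250305", "name": "web_search"}],
        messages=history,
    )
    reply = "".join(b.text for b in response.content if b.type == "text")
    # Local stand-in for a date-stamped artifact.
    pathlib.Path("artifacts").mkdir(exist_ok=True)
    pathlib.Path(f"artifacts/news-{today}.md").write_text(reply)
    history.append({"role": "assistant", "content": reply})
    return history

In the app, artifacts play the role of the date-stamped files and one chat session per week plays the role of the history list; the Sunday summary would just be one more call with a different prompt.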

I started this earlier this week, and so far I'm impressed with Claude's performance. Even though I'm not supposed to look at the daily artifacts, I have peeked a bit to see how Claude's judgement is holding up. The first day's direct response raised too many issues, i.e. things that could have waited till Sunday, but we've been tweaking the instructions and today's direct response was empty (which is more or less the goal).

Tomorrow I will get the weekly summary generated from the daily artifacts - a summary of a set of summaries - and we'll see how it goes.

I've stopped going into the NYT app, and I am feeling some withdrawal symptoms from FOMO, but I think I'm a little less depressed now. Fingers crossed.

(P.S. - thanks to Claude.ai for the title suggestion. My first try, "FOMO Solution: AI", just didn't please me.)

ABDICATING JUDGEMENT - A PHILOSOPHICAL LOOK

You'll note that I'm handing Claude a big responsibility. I'm asking it to decide if a news item is "important enough". Are modern LLMs up to that task?

Well, that's part of what this experiment is all about. I'm curious to see how it does. The initial results suggest that it errs by including too much rather than too little, but I'm still tweaking the instructions.

But it also raises a more philosophical issue - should I be abdicating my judgement to an AI? Well, as it relates to news, we abdicated that responsibility long ago: news sources hire editors to make those judgement calls for us. Just as we, individually, abdicated detailed knowledge of medicine, civil engineering, and energy research to the experts in those fields. Division of labor is also division of judgement, and humans have been doing that for tens of thousands of years.

But maybe this isn't an individual question. It's a species question. Should the human species abdicate judgement to machines? After all, humans base their judgement on experience, and modern LLM-based AIs don't have experience. They have training data. However, I would argue that LLMs are benefiting from human experience. The training data they use contains the distilled wisdom of millions of experiences. As a computer programmer, I'm constantly amazed at the problems Claude knows the solution to, simply because it read all of Reddit and Stack Overflow, two sites that specialize in solving problems. Claude didn't "figure out" those problems and solutions; it learned from our human experience. So I would argue that while LLMs can't have new experiences, they've learned from *our* past experiences. It's not the same, but the bottom line is that it seems to work pretty well.

(Digression: Claude pointed out to me one gap between human experiential learning and AI training - post-training learning. Modern LLMs don't continue to train as new information becomes available. A training run is a big deal - hugely expensive and energy intensive. Each use of the LLM does not give it a chance to learn. However, my implementation allows for a feedback loop of sorts. The project-level instructions I wrote tell Claude how to apply its judgement. If I see an issue, I tweak the instructions to fine-tune Claude's behavior. This is a form of indirect learning from experience, assisted by the human. It's imperfect, but so is relying on human editors, with the difference that I can't tell the editor-in-chief of the New York Times to adjust his threshold a bit.)
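
(A programmer's footnote on that feedback loop: if I were scripting this, I'd keep the exclusion rules as plain data and render them into the instructions, so each "tweak" is a one-line edit between runs. A minimal sketch, continuing the hypothetical Python setup above:)

# The rules live as editable data; "teaching" the curator is just editing
# this list and re-rendering the instructions before the next run.
EXCLUSION_RULES = [
    "Omit items from entertainment news.",
    "Omit items from science news (covered elsewhere).",
    "Omit foreign-relations items, e.g. trade wars.",
    "Omit armed-conflict items that don't shift global relations.",
    "Omit shifts in the US economy.",
]

def render_instructions(base: str) -> str:
    rules = "\n".join(f"{i}. {r}" for i, r in enumerate(EXCLUSION_RULES, 1))
    return f"{base}\n\nEXCLUSION RULES\n\n{rules}"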

Also, we already have abdicated some judgement to our machines. Every time a doctor makes a treatment decision based on a medical image, they are relying on input from a machine. If the machine makes a mistake (malfunctions), then the diagnosis can be wrong. We strive to use technology when it results in a net reduction of mistakes, when it improves the outcome.

I think it's a false premise to say that, up till now, humans have had the final say - that we get input from our machines, but we make the final decisions. It's false because if our "final decisions" are based on faulty input, we're no better off. Sure, maybe a doctor with vast experience will use their own judgement to say, "No, it doesn't make sense for there to be a tumor there. Let's get confirmation." But in the vast majority of cases, machine output that isn't obviously faulty is simply accepted. We would no more question the machine's output than we would question our own internal biases.

In a practical sense, my abdicating news-importance judgement to an LLM is of no particular importance. From a philosophical point of view, every time humans have used machines to help them achieve their goals, they have done a better job of achieving them. I don't see AIs as fundamentally different. I see this as less of an abdication and more of a collaboration.
