Tuesday, December 27, 2022

Tgen: Traffic Generator Scripting Language

Oh no, not another little language! (Perhaps better known as a "domain-specific language".)

Yep. I wrote a little scripting language module intended to assist in the design and implementation of a useful network traffic generator. It's called "tgen" and can be found at https://github.com/fordsfords/tgen. I won't write much about it here other than to mention that it implements a simple interpreter. In fact, the simplicity of the parser might be the thing I'm most pleased with, in terms of bang for buck. See the repo for details.

Little Languages Considered Harmful?

So yeah, I'm reasonably pleased with it. But I am also torn. Because "little languages" have both a good rap (Jon Bentley, 1986) and a bad rap (Olin Shivers, 1996).

In that second paper, Shivers complains that little languages:

  • They are usually ugly, idiosyncratic, and limited in expressiveness.
  • Basic linguistic elements such as loops, conditionals, variables, and subroutines must be reinvented and re-implemented. This approach is unlikely to produce a high-quality language design.
  • The designer is more interested in the task-specific aspects of the design, to the detriment of the language itself. For example, the little language often has a half-baked variable scoping discipline, weak procedural facilities, and a limited set of data types.
  • In practice, they often lead to fragile programs that rely on heuristic, error-prone parsers.

Of course, Shivers doesn't *really* think that little languages are a bad idea. He just thinks that they are usually implemented poorly, and his paper shows the right way to do it (in Scheme, a Lisp variant).

But there are some good arguments against developing little languages at all, even if implemented well. At my first job out of college, I wrote a little language to help in the implementation of a menu system. The menus were tedious and error-prone to write, and the little language improved my productivity. I was proud of it. An older and wiser colleague gently told me that there are some fundamental problems with the idea. His reasoning was as follows:

  • We already have a programming language that everybody knows and that is rich and well-tested.
  • You've just invented a new programming language. It probably has bugs in the parser and interpreter that you'll have to find and fix. Maybe the time you spend doing that is paid for by the increased productivity in adding menus. Maybe not.
  • The new language is known by exactly one person on the planet. Someday you'll be on a different project or at a different company, and we can't hire somebody who already knows it. There's an automatic learning curve.
  • Instead of writing an interpreter for a little language, you could have simply designed a good API for the functional elements of your language, and then used the base language to call those functions. Then you have all the base language features at your disposal, while still having a high level of abstraction to deal with the menus.

He was a nice guy, so while his criticism stung, he didn't make it personal, and he was tactful. I was able to see it as a learning experience. And ever since then, I've been skeptical of most little languages.

Then Why Did I Make a Little Language?

Well, first off, I *DID* create an API. So instead of writing scripts in the scripting language, you *can* write them in C/C++. I expect this to be interesting to QA engineers wanting to create automated tests that might need sophisticated usage patterns (like waiting to receive a message before sending a burst of outgoing traffic). I would not want to expand my scripting language enough to support that kind of script. So being able to write those tests in C gives me all the power of C while still giving me the high level of abstraction for sending traffic.
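
To make that concrete, here's the flavor of what I mean, as a minimal C sketch. To be clear: these are NOT tgen's actual functions (see the repo for the real API); every name here is invented, and the stubs just stand in for real implementations.

/* Hypothetical QA test: wait for a trigger message, then respond
 * with a burst. All names are invented for illustration. */
#include <stdio.h>

typedef struct { int placeholder; } tg_handle_t;  /* stand-in handle type */

static void tg_wait_msg(tg_handle_t *tg) { (void)tg; /* block until a message arrives */ }

static void tg_send(tg_handle_t *tg, int msg_len, int msgs_per_sec, int duration_ms)
{
    (void)tg;
    printf("send %d-byte msgs at %d msgs/sec for %d ms\n",
           msg_len, msgs_per_sec, duration_ms);
}

int main(void)
{
    tg_handle_t tg = { 0 };

    tg_wait_msg(&tg);               /* wait to receive a message... */
    tg_send(&tg, 700, 50000, 100);  /* ...then send a burst of traffic */
    return 0;
}

The point isn't the specific calls; it's that loops, conditionals, and variables come for free from C, while the traffic-generation abstraction lives in the API.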

But also, a network traffic generator is a useful thing to be able to run interactively for ad-hoc testing or exploration. It would be annoying to have to recompile the whole tool from source each time you want to change the number or sizes of messages.

Of course, most traffic generation tools take care of that by letting you specify the number and sizes of the messages via command-line options or GUI dialogs. But most of them don't let you have a message rate that changes over time. My colleagues and I deal with bursty data. To properly test the networking software, you should be able to create bursty traffic. Send at this rate for X milliseconds, then at a much higher rate for Y milliseconds, etc. The "tgen" module lets you "shape" your data rate in non-trivial ways.
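
Here's a minimal sketch (mine, not tgen's implementation) of what "shaping" means: a table of (rate, duration) phases that the sender walks through. Real code would pace individual sends against a clock; this just shows the structure.

/* Sketch of shaped, bursty sending. send_message() is a stub. */
struct phase { int msgs_per_sec; int duration_ms; };

static void send_message(void) { /* stand-in for the real send */ }

static void run_shape(const struct phase *phases, int num_phases, int repeats)
{
    int r, p;
    for (r = 0; r < repeats; r++) {
        for (p = 0; p < num_phases; p++) {
            long num_msgs = (long)phases[p].msgs_per_sec
                            * phases[p].duration_ms / 1000;
            long m;
            for (m = 0; m < num_msgs; m++) {
                send_message();
                /* real code would sleep/busy-wait here to hit the rate */
            }
        }
    }
}

int main(void)
{
    /* send at a low rate for X ms, then a much higher rate for Y ms */
    struct phase bursty[] = {
        { 100,   900 },   /* 100 msgs/sec for 900 ms... */
        { 50000, 100 },   /* ...then 50,000 msgs/sec for 100 ms */
    };
    run_shape(bursty, 2, 10);  /* ten seconds of bursty traffic */
    return 0;
}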

Did I get it right? My looping construct is ugly and could only be loved by an assembly language programmer. Maybe I should have made it better? Or just omitted it? Dunno. I'm open to discussion.

Anyway, I'm hoping that others will be able to take the tgen module and do something useful with it.

Sunday, December 25, 2022

Critical Program Reading (1975) - 16mm Film

I find this film delightful: Critical Program Reading (1975) - 16mm Film

I would love to know about choices the filmmaker made. The vibe seems very 1960s; was that intentional?

I also didn't know that structured programming methods were that old. I was born in 1957. According to Wikipedia, the concept of "structured programming" was born in those years; Dijkstra popularized the underlying ideas with his 1968 open letter "Go To Statement Considered Harmful" and later coined the term itself.

For some reason, I thought the "structured programming wars" were during the mid-to-late 1980s, when the old-school "spaghetti code" techniques were finally being replaced by more modern techniques. I guess I thought this because I clearly remember the "Goto Considered Harmful" Considered Harmful letter, and its replies. But the true war against spaghetti code was pretty much over by then. The battle at that point was not about whether we should use descriptive identifier naming, block structure, and simple control flow. It was about whether the abolition of the goto should be absolute.

<rant read="optional">

I also remember feeling insulted by Dijkstra's On a Somewhat Disappointing Correspondence. He said that a competent professional programmer in 1987 should know the theorem of "the bounded linear search" and should be able to derive that theorem and its proof. I could not even read the theorem since I was not familiar with the notation. And neither could any of my colleagues. I suspect that only a small percentage of professional programmers of the day (and of today as well) would qualify as competent by Dijkstra's standards.

In retrospect, I do have some sympathy for Dijkstra's opinion. He knew full well that his standards did not match those of the programming profession. That's exactly what he was complaining about. He strongly felt that programmers should be grounded in the science of computer science. He wanted programmers to spend their time proving their algorithms correct, not slavishly (and inadequately) testing them. I suspect he wasn't saying that the programmers of the day were bad or stupid people, but that they were improperly educated and then released into the field prematurely. I suspect he might agree with, "You are not competent, but it's probably not your fault. It's more the fault of the university that gave you a degree and the company that hired you." Part of me wishes that I and the rest of the world were more dedicated to rigor and depth of mastery.

But, of course, we are not. Airline pilots are not trained to design an airplane. House painters can't give you the chemical formulae of their paints. I remember when my wife had cancer, she was advised against using a surgeon who was a highly respected researcher; she should use a doctor who does hundreds of these surgeries per year. You usually want an experienced practitioner, not a theoretician.

Is the same thing true of programmers? Well, I will note that Dijkstra's program uses single-letter variables, a definite no-no in most structured programming. If he had submitted that to me as part of a job application, I doubt I would have hired him. But maybe that's because *I* am not competent. Maybe software would be much better today if we programmers met Dijkstra's standards. But there would be a heck of a lot less software out there, that's for sure. And cynical humor aside, I do rather like having a smart phone with a GPS.

</rant>

Friday, October 28, 2022

So long, Delta

Note to my technically-oriented readers (all two of you). You might want to skip this post. It is not technical in nature.

I used to use Delta Airlines a lot. Not anymore.

https://www.cnbc.com/amp/2022/10/22/delta-air-lines-settles-with-pilot-who-raised-safety-concerns.html

https://www.seattletimes.com/business/boeing-aerospace/delta-weaponized-mental-health-rules-against-a-pilot-she-fought-back/

So, am I refusing to fly Delta because they were unfair and downright evil to an employee? Heck no. Happens all the time. Sometimes employers get caught, and they have to make amends. Sometimes they don't get caught, and they get away with it. If I refused to deal with companies who misbehave, I would have to become a hermit doing subsistence farming with rocks and sticks.

No, I'm refusing because even after Delta's dirty tricks were exposed and their claims totally debunked, the manager behind it all was "...promoted to CEO of Endeavor, Delta’s regional carrier subsidiary, and senior vice president of Delta Connection, the airline’s partnership with regional carriers Skywest and Republic Airways."

DUDES! When you get caught, you're supposed to at least pretend to be sorry! And somebody has to become the sacrificial lamb. But I guess that only happens when the lamb is a low-level employee. When it's a bigwig, then better to just close ranks, settle, and pretend everything's cool. Nothing to see here, move along.

Sorry, no can do. And yes, I know, I don't have all the facts. Maybe more facts will come to light. Maybe Graham truly believed he was doing the ethical and moral thing.

What do you say, Graham? Did you genuinely believe in your heart that Petitt wasn't safe to fly? Based on her complaints about safety? Feel free to post a reply here. If it sounds credible, I'll post a retraction and fly Delta again.

Hopefully I can sleep tonight over the sound of crickets.

Tuesday, July 5, 2022

The Time I Found a Hardware Bug

As I approach age 65, I've been doing some reminiscing. And discovering that my memory is imperfect. (pause for laughter) So I'm writing down a story from earlier days in my career. The story has no real lessons that are applicable today, so don't expect to gain any useful insights. But I remember the story fondly so I don't want it to slip completely from my brain.

WARNING: this story is pretty much self-indulgent bragging. "Wow, that Steve guy sure was smart! I wonder what happened."

I think it was 1987, +/- 2 years (it was definitely prior to Siemens moving to Hoffman Estates in 1989).

The product was "Digitron", an X-ray system based on a Multibus II backplane and an 8086 (or was it 80286?) CPU board running iRMX/86. I think the CPU board was off-the-shelf from Intel, but most of the rest of the boards were custom, designed in-house.

At some point, we discovered there was a problem. We got frequent "spurious interrupts". I *think* these spurious interrupts degraded system performance to the degree that sometimes the CPU couldn't keep up with its work, resulting in a system failure. But I'm not sure -- maybe they just didn't like having the mysterious interrupts. At any rate, I worked on diagnosing it.

The CPU board used an 8259A interrupt controller chip (see its datasheet) that supported 8 vectored interrupts. There was a specific hardware handshake between the 8259A and the CPU chip that let the 8259A tell the CPU the interrupt vector. The interrupt line is asserted and must be held active while the handshake takes place. At the end of the hardware handshake, the CPU calls the ISR, which interacts with the interrupting hardware. The ISR clears the interrupt (i.e., makes the hardware stop asserting the interrupt line) before returning.

According to the 8259A datasheet, spurious interrupts are the result of an interrupt line being asserted, but then removed, before the 8259A can complete the handshake. Essentially the chip isn't smart enough to remember which interrupt line was asserted if it went away too quickly. So the 8259A declares it "spurious" and defaults to level 7.

I don't remember how I narrowed it down, but I somehow identified the peripheral board that was responsible.

For most of the peripheral boards, there was a single source of interrupt, which used an interrupt line on the Multibus. But there was one custom board (don't remember which one) where they wanted multiple sources of interrupt, so the hardware designer included an 8259A on that board. Ideally, it would have been wired to the CPU board's 8259A in its cascade arrangement, but the Multibus didn't allow for that. So the on-board 8259A simply asserted one of the Multibus interrupt lines and left it to the software to determine the proper interrupt source. The 8259A was put in "polled mode", and the ISR for the board's interrupt would read the status of the peripheral 8259A to determine which of the board's "sub-interrupts" had happened. The ISR would then call the correct handler for that sub-interrupt.

Using an analog storage scope, I was able to prove that the peripheral board's 8259A did something wrong when used in its polled mode. The peripheral board's 8259A asserted the Multibus interrupt level, which led to the CPU board properly decoding the interrupt level and invoking the ISR. The ISR then performed the polling sequence, which consisted of reading the status and then writing something to clear the interrupt. However, the scope showed that during the status read operation, while the Multibus read line was asserted, the 8259A released its interrupt output. When the read completed, the 8259A re-asserted its interrupt. This "glitch" told the CPU board's 8259A that another interrupt was starting. Then, when the ISR cleared the interrupt, the 8259A again released its interrupt. But from the CPU board's 8259A's point of view, that "second" interrupt was not asserted long enough to complete the handshake with the CPU, so it was treated as a spurious interrupt.

(Pedantic aside: although I use the word "glitch" to describe the behavior, that's not the right terminology. A glitch is typically caused by a hardware race condition and would have zero width if all hardware had zero propagation delay. This wasn't a glitch because the release and re-assert of the interrupt line was tied to the bus read line. No race condition. But it resembled a glitch, so I'll keep using that word.)

HARDWARE BUG?

The polling mode of operation of the 8259A was a documented and supported use case. I consider it a bug in the chip design that it would glitch the interrupt output during the status read operation. But I didn't have the contacts within Intel to raise the issue, so I doubt any Intel engineer found out about it.

WORKAROUND

I designed a simple workaround that consisted of a chip - I think it was a triple 3-input NAND gate, or maybe NOR, possibly open collector - wired to be an AND function. The interrupt line was active low, so by driving it with an AND, it was possible to force it to active (low). I glued the chip upside-down onto the CPU board and wire-wrapped directly to the pins. One NAND gate was used as an inverter to make another NAND gate into an AND circuit. One input to the resulting AND was driven by the interrupt line from the Multibus, and the other input was driven by an output line from a PIO chip that the CPU board came with but wasn't being used. I assume I had to cut at least one trace and solder wire-wrap wire to pads, but I don't remember the details.

The PIO output bit is normally inactive, so that when the peripheral board asserts an interrupt, the interrupt is delivered to the CPU. When the ISR starts executing, the code writes the active value to the PIO bit, which forces the AND output to stay low. Then the 8259A is polled, which glitches the Multibus interrupt line, but the AND gate keeps the interrupt active, masking the glitch. Then the ISR writes the inactive value to the PIO and clears the interrupt, which releases the Multibus interrupt line. No more spurious interrupt.
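
From memory, the ISR sequence looked something like the following C sketch. Port addresses and helper names are invented (I don't remember the real ones), and outb()/inb() are stand-ins for whatever port I/O primitives we had under iRMX/86; the only detail I'm confident of is the 8259A poll protocol from the datasheet.

/* Reconstruction of the ISR with the PIO glitch-mask workaround. */
#define PIO_PORT   0x60   /* invented: PIO output bit address */
#define BOARD_PIC  0x80   /* invented: peripheral 8259A port */
#define OCW3_POLL  0x0C   /* 8259A poll command (per the datasheet) */

static void outb(int port, int val) { (void)port; (void)val; /* stub */ }
static int  inb(int port) { (void)port; return 0; /* stub */ }
static void dispatch_sub_interrupt(int level) { (void)level; /* call handler */ }
static void clear_board_interrupt(void) { /* hardware releases the line */ }

void board_isr(void)
{
    int poll;

    outb(PIO_PORT, 1);           /* drive the AND gate: hold the Multibus
                                    interrupt line active (low) */
    outb(BOARD_PIC, OCW3_POLL);  /* ask the peripheral 8259A which
                                    sub-interrupt fired */
    poll = inb(BOARD_PIC);       /* this read is where the glitch happens;
                                    the AND gate masks it */
    dispatch_sub_interrupt(poll & 0x07);

    outb(PIO_PORT, 0);           /* un-mask; the line is solidly active */
    clear_board_interrupt();     /* now release the Multibus interrupt */
}

int main(void) { board_isr(); return 0; }  /* just to exercise the sketch */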

Kludge? Hell yes! And a hardware engineer assigned to the problem figuratively patted me on the head and said they would devise a "proper" solution to the spurious interrupt problem. After several weeks, that "proper" solution consisted of using a wire-wrap socket with its pins bent upwards so that instead of wire-wrapping directly to the chip's pins, they wire-wrapped to proper posts.

Back in those days, people didn't have a digital camera in their pocket, so I have no copy of the picture I took of the glitch. And I'm not confident that all the details above are remembered correctly. E.g. I kind of remember it was a NOR gate, but that doesn't make logical sense. Unless maybe I used all 3 gates and boolean algebra to make an AND out of NOR gates? I don't remember. But for sure the point was to mask the glitch during the execution of the ISR.

But I remember the feeling of vindication. My hardware training made me more valuable than a pure software engineer would have been.

Sunday, July 3, 2022

Math Nerd?

I just made two posts on recreational math. I'm what you might call a math nerd wannabe. I'm NOT a math nerd - I don't have the flair or the rigor required to make that claim - but I've always wished I were.

I used to read Martin Gardner in Scientific American. And I tried to enjoy it with mixed success. More recently, I subscribed to Numberphile, but finally unsubscribed when I realized I tend to lose focus about halfway through most of the videos. And 3Blue1Brown? The same but more. It's not just that I have trouble following the math (although I often do), I'm just not interested enough to try hard enough. But darn it, I wanna be! :-)

When I was very young, I aspired to be a scientist so I could invent cool things. Never mind that theoretical scientists and inventors tend to be very different kinds of people; in both cases, I don't have the knack. I think I'm more of a hobbyist who discovered that he could be paid well for his hobby. I've never invented a cool algorithm, but I've enjoyed implementing cool algorithms that real scientists have invented. I like tinkering, taking things apart to see what makes them tick, and sometimes even putting them back together.

Not that there's anything wrong with this. I've led, and continue to lead, a happy, productive, and fulfilling life. I'm reasonably well-liked and respected by my peers. I have no complaints about how life has treated me.

But I am sometimes wistful about what might have been ... being a math nerd/scientist/inventor would be pretty cool too.

Anyway, I won't be making regular posts about math ... unless I do. ;-)

Information in the Noise

Wow, a non-math nerd posting twice about math. What's that about?

Derek Muller of Veritasium posted a video about the 100 prisoners puzzle (I like "puzzle" in this context better than "riddle" or "problem"). Unlike my earlier post, I have no complaints about this video. Derek is one of the top-tier educational YouTubers, and he did a fantastic job of explaining it. (As before, I'm not going to explain it here; watch his video. Seriously, just watch it.)

So why do I feel the need to comment? I guess I feel I have a small but interesting (to me) tidbit to add.

Derek et al. describe the puzzle's "linked list" solution (my name) as giving a counter-intuitive result, and I guess I have to agree. The numbers are distributed to the boxes randomly, so how could any strategy give a prisoner a better chance of success than random selection? IT'S RANDOM!!!!!!

AN INTUITIVE UNDERSTANDING

And here's my tidbit: it's not as random as it seems. For this puzzle, the numbers are assigned randomly to boxes, without replacement. I.e., you won't find a given number in more than one box, and no number between 1 and 100 is skipped. This is obvious for the setup of the puzzle, but randomizing without replacement puts constraints on the system. Those constraints add information to the noise.

If prisoner number 13 randomly opens box 52, he knows he has a one in 100 chance of seeing his number in that box. He opens it and sees the number 1. He now knows FOR SURE that no other box has the number 1 in it. So his second random choice will have a one in 99 chance of being his number. Each choice gives some information that affects the probability of the next choice. (I.e., the samples are not independent.)

It is these constraints that lead directly to the cycles that are at the heart of the puzzle. And clever people have calculated the probability of having a cycle longer than 50 to be about 0.688. So the "linked list" strategy gives the prisoners a ~= 0.312 probability of being set free. That's the point of Derek's video.
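
If you'd rather check that number than trust the clever people, it's easy to test by simulation. Here's a small C program (mine, not from the video) that shuffles 100 boxes, has each prisoner follow the chain for at most 50 opens, and reports the fraction of trials where everyone finds his number. It should print a rate near 0.31.

#include <stdio.h>
#include <stdlib.h>

#define N 100
#define TRIALS 100000

int main(void)
{
    int boxes[N];
    int successes = 0;
    int t, i, p;

    srand(12345);  /* fixed seed so runs are repeatable */

    for (t = 0; t < TRIALS; t++) {
        /* Assign numbers to boxes randomly WITHOUT replacement
           (Fisher-Yates shuffle of a permutation). */
        for (i = 0; i < N; i++) boxes[i] = i;
        for (i = N - 1; i > 0; i--) {
            int j = rand() % (i + 1);
            int tmp = boxes[i]; boxes[i] = boxes[j]; boxes[j] = tmp;
        }

        /* Each prisoner p starts at box p and follows the chain.
           The whole group succeeds iff every cycle is length <= 50. */
        int all_ok = 1;
        for (p = 0; p < N && all_ok; p++) {
            int box = p, found = 0, step;
            for (step = 0; step < N / 2; step++) {
                if (boxes[box] == p) { found = 1; break; }
                box = boxes[box];
            }
            if (!found) all_ok = 0;
        }
        if (all_ok) successes++;
    }

    printf("linked-list strategy success rate: %.4f (theory ~0.312)\n",
           (double)successes / TRIALS);
    return 0;
}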

Let's ruin the puzzle for a moment. Let's assign a random number between 1 and 100 to each box with replacement. It's entirely possible, even probable, that you'll have duplicates (the same number in more than one box) and skips (a number that is not in any box). One effect of this change is that the numbers will no longer necessarily be arranged in cycles. You can have many numbers NOT in a cycle. So the "linked list" solution to the puzzle doesn't improve your chances of survival over pure chance. Getting rid of the "without replacement" constraint removes the information from the noise.

This is how I get an intuitive feeling that you can have a much higher probability of success with the "linked list" solution to the original puzzle - you're taking advantage of the information that's in the noise.

WITH REPLACEMENT

What about my ruined version, where the numbers are assigned to boxes with replacement? To start with, let's calculate the probability that you get a distribution of numbers in boxes that is even possible for the prisoners to win (i.e., every number 1-100 is assigned exactly once). My probability-fu is weak, but I'll try. I think it is (100!)/(100**100) ~= 9.33e-43. Wow, that's a really low probability.
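
Sanity-checking that number with my own arithmetic: there are 100**100 equally likely ways to assign the numbers with replacement, and 100! of them use every number exactly once.

100! ~= 9.33e157
100**100 = 1e200
(100!)/(100**100) ~= 9.33e157 / 1e200 = 9.33e-43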

On the off chance that you get a solvable distribution, the probability of success with the linked list solution is ~= 0.312. So the total probability of success for my ruined version, WITH the linked list solution, is ~= 0.312 * 9.33e-43 ~= 2.9e-43. If instead the prisoners choose their boxes randomly, each prisoner has a 50/100 chance, so it's ~= (0.5**100) * 9.33e-43 ~= 7.36e-73.

The prisoners had better appeal to Amnesty International.

There is no Vase

I'm not a math nerd, so I probably shouldn't be posting on this subject. But when has a lack of expertise ever stopped me from having an opinion?

I just watched the Up and Atom video: An Infinity Paradox - How Many Balls Are In The Vase? In it, Jade describes the Ross–Littlewood paradox related to infinite pairings. I liked the video but was not satisfied with the conclusion.

I won't give the background; if you're interested in this post, go watch the video and skim the Wikipedia article. Basically, she presents the "Depends on the conditions" solution (as described in the Wikipedia article) without mentioning the "underspecified" and "ill-formed" solutions. And I guess that's an OK choice since the point of her video was to talk about infinities and pairings. But she kept returning to the question, "how many balls are there *actually*?"

Infinity math has many practical applications, especially if the infinity is related to the infinitely small. An integral is frequently described as the sum of the areas of rectangles under a curve as the width of the rectangles becomes infinitesimal - i.e., approaches zero. This gives a mathematically precise calculation of the area. Integrals are a fundamental tool for any number of scientific and engineering fields.
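
In symbols, that's the standard Riemann-sum definition (textbook notation, nothing specific to the video): split the interval [a, b] into n strips of width dx = (b - a)/n, and then

integral of f(x) from a to b = limit, as n -> infinity, of SUM(i = 1..n) f(x_i) * dx

A finite sum of skinny rectangles approximates the area; the limit nails it exactly.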

But remember that math is just a way of modeling reality. It is not *really* reality.

There is no such thing as an infinitesimal anything. There is a minimum distance, a minimum time, and the uncertainty principle guarantees that even as you approach the minimum in one measure, your ability to know a different measure decreases. When the numbers become small enough, the math of the infinitesimal stops being an accurate model of reality, at least not in the initially intuitive ways.

But they are still useful for real-world situations. Consider the paradox of Achilles and the tortoise, one of Zeno's paradoxes. (Again, go read it if you don't already know it.) The apparent paradox is that Achilles can never catch up to the tortoise, even though we know through common experience that he will catch up with and pass the tortoise. The power of infinity math is that we can model it and calculate the exact time he passes the tortoise. The model will match reality ... unless an eagle swoops down, grabs the tortoise, and carries it across the finish line. :-)
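
For concreteness, here's that calculation with made-up numbers: give the tortoise a 100-meter head start, with Achilles running 10 m/s and the tortoise 1 m/s. Each of Zeno's "stages" takes one tenth as long as the one before, so the total time is a convergent geometric series:

t = 10 + 1 + 0.1 + 0.01 + ... = 10 / (1 - 1/10) ~= 11.11 seconds

which matches the direct calculation: 100 / (10 - 1) ~= 11.11 seconds.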

But models can break down, even without eagles, and a common way for infinity models to break down is if they don't converge. 1/2 plus 1/4 plus 1/8 plus 1/16 ... converges on a value (1). As you add more and more terms, it approaches a value that it will never exceed with a finite number of terms. So we say that the sum of the *infinite* series is *equal* to the limit value, 1 in this case. But what about 1/2 plus 1/3 plus 1/4 plus 1/5, etc.? This infinite series does NOT converge. It grows without bound. And therefore, we cannot claim that it "equals" anything at infinity. We could claim that the sum equals infinity, but this is not well defined since infinity is not a number.
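
The partial sums make the difference concrete. For the first series, 1/2 + 1/4 + ... + 1/2**n = 1 - 1/2**n, which creeps toward 1 and never passes it. For the second, 1/2 + 1/3 + ... + 1/n grows like ln(n): slowly, but past any bound you care to name. No limit means nothing to "equal".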

Here's a similar train of thought. What is 1/0? If you draw a graph of 1/X, you will see the value grow larger and larger as X approaches 0. So 1/0 must be infinity. What is 0 * (1/0)? Again, if you graph 0 * (1/X), you will see a horizontal line stuck at zero as X approaches 0. So I guess that 0 * (1/0) equals 0, right? Not so fast. Let's graph X * (1/X). That is a horizontal line stuck at 1. So as X approaches 0, X * (1/X) equals 1. So 0 * 1/0 equals 1. WHICH ONE IS RIGHT???????? What *really* is 0 * (1/0)?
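
Stated in limit language (my restatement of those three graphs), these are three different limits, and they honestly disagree:

limit of 1/x as x -> 0+ : grows without bound
limit of 0 * (1/x) as x -> 0 : 0 (the function is 0 for every x != 0)
limit of x * (1/x) as x -> 0 : 1 (the function is 1 for every x != 0)

0 * (1/0) is the indeterminate form "0 times infinity": the answer depends entirely on how you approach it, which is another way of saying it doesn't have one.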

The answer is that the problem is ill-formed. The 1/X term does not converge. The value of 1/0 is not "equal to infinity", it is undefined. My train of thought above is similar to the fallacious "proof" that 1 equals 2. And it seems to me that the "proof" that the number of balls in the vase can be any number you want it to be is another mathematical fallacy.

The only way to model the original vase problems is to draw a graph of the number of balls in the vase over time. Even in the case where you remove the balls sequentially starting at 1, you will see the number of balls growing without bound as time proceeds. Since this function does not converge, you can't say that it "equals" anything at the end. But it tends towards infinity, so claiming that it equals some finite value *at* the end is another example of an invalid application of math to reality.

But I shouldn't complain. Jade used the "paradox" to produce an engaging video teaching about pairing elements in infinite sets. And she did a good job of that.

Wednesday, May 4, 2022

CC0 vs GPL

I've been writing little bits and pieces of my own code for many years now. And I've been releasing it as CC0 ("public domain"; see below). I've received a bit of criticism for it, and I guess I wanted to talk about it.

I like to write software. And I like it when other people benefit from my software. But I don't write end-user software, so the only people who benefit from my code are other programmers. But that's fine, I like a lot of programmers, so it's all good.

There are different ways I could offer my software. Much open-source software is available under a BSD license, an Apache license, or an MIT license. These differ in ways that are probably important to legal types, but for the most part, they mean that you can use the code for pretty much any purpose as long as you give proper attribution to the original source. So if I write a cool program and use some BSD code, I need to state my usage of that code somewhere in my program's documentation.

So maybe I should do that. After all, if I put in the effort to write the code, shouldn't I get the credit?

Yeah, that and a sawbuck will get me a cup of coffee. I don't think those attributions are worth much more than ego-boosting, and I guess my programmer ego doesn't need that boost.

With the exception of the GNU General Public License (GPL), I don't think the ego-boosting open-source licenses buy me anything that I particularly want. And they do introduce a barrier to people using my code. I've seen other people's code that I've wanted but decided not to use because of the attribution requirement. I don't want the attributions cluttering up my documentation or adding licensing complications for anybody who wants to use my code. (For example, I was using somebody else's getopt module for a while, but realized I wasn't giving proper attribution, so I wrote my own.)

But what about GNU?

The GPL is a different beast. It is intended to be *restrictive*. It puts rules and requirements on the use of the code. It places obligations on programmers. The stated goal of these restrictions is to promote freedom.

But I don't think that is really the point of GPL. I think the real point of GPL is to let certain programmers feel clean. These are programmers who believe that proprietary software is evil, and by extension, any programmer who supports proprietary software is also evil. So ignoring that I write proprietary software for a living, my CC0 software could provide a small measure of support for other proprietary software companies, making their jobs easier. And that makes me evil. Not Hitler-level evil, but at least a little bit evil.

If I license my code under GPLv3, it will provide the maximum protection possible for my open-source code to not support a proprietary system. And that might let me sleep better at night, knowing that I'm not evil.

Maybe somebody can tell me where I'm wrong on this. Besides letting programmers feel clean, what other benefit does GPL provide that other licenses (including CC0) don't?

I've read through Richard Stallman's "Why Open Source Misses the Point of Free Software" a few times, and he keeps coming back to ethics, the difference between right and wrong. Some quotes:

  • "The free software movement campaigns for freedom for the users of computing; it is a movement for freedom and justice."
  • "These freedoms are vitally important. They are essential, not just for the individual users' sake, but for society as a whole because they promote social solidarity—that is, sharing and cooperation."
  • "For the free software movement, free software is an ethical imperative..."
  • "For the free software movement, however, nonfree software is a social problem..."
I wonder what other things a free software advocate might believe. Is it evil to have secret recipes? Should Coke's secret formula be published? If I take a recipe that somebody puts on youtube and I make an improvement and use the modified recipe to make money, am I evil? What if I give attribution, saying that it was inspired by so-and-so's recipe, but I won't reveal my improvement? Still evil?

How about violin makers that have secret methods to get a good sound? Evil?

I am, by my nature, sympathetic to saying yes to all of those. I want the world to cooperate, not compete. I used to call myself a communist, believing that there should be no private property, and that we should live according to, "From each according to his ability, to each according to his needs". And I guess I still do believe that, in the same way that I believe we should put an end to war, cruelty, apathy, hatred, disease, hunger, and all the other social and cultural evils.

Oh, and entropy. We need to get rid of that too.

But none of them are possible, because ... physics? (That's a different subject for a different day.)

But maybe losing my youthful idealism is nothing to feel good about. Instead of throwing up my hands and saying it's impossible to do all those things, maybe I should pick one of them and do my best to improve the world. Perhaps the free software advocates have done exactly that. They can't take on all the social and cultural ills, so they picked one in which they could make a difference.

But free software? That's the one they decided was worth investing their altruism?

Free software advocates are always quick to point out that they don't mean "free" as in "zero cost". They are referring to freedoms - mostly the freedom to run a modified version of a program, which is a freedom that is meaningless to the vast majority of humanity. I would say that low-cost software is a much more powerful social good. GPL software promotes that, but so do the other popular open-source licenses. (And so does CC0).

So anyway, I guess I'm not a free software advocate (big surprise). I'll stick with CC0 for my code.

What is CC0?

The CC0 license attempts to codify the concept of "public domain". The problem with just saying "public domain" is that the term does not have a universally agreed-upon definition, especially legally. So CC0 is designed to approximate what we think of as public domain.

Tuesday, February 15, 2022

Pathological cases

Jacob Kaplan-Moss said something wonderful yesterday:

Designing a human process around pathological cases leads to processes that are themselves pathological.

This really resonated with me.

Not much to add, just wanted to share.

Thursday, February 3, 2022

Nice catch, Grammarly

I was writing an email and accidentally left out a word. I meant to write, "I've asked the team for blah...". But I accidentally omitted "asked", so it just said, "I've the team for blah...".

Grammarly flagged "I've", suggesting "I have". Since my brain still couldn't see my mistake, I thought it was complaining about "I've asked the team...". I was about to dismiss the suggestion, but decided to click the "learn more" link. It said that, except in British English, using the contraction "I've" to express possession sounds unnatural or affected. As in: "Incorrect: I've a new car".

Ah HAH! That triggered me to notice the missing word "asked". I put it in, and Grammarly was happy. I consider this a good catch. Sure, it misdiagnosed the problem, but it knew it was a problem.

Thanks, Grammarly!


Wednesday, January 5, 2022

Bash Process Substitution

I generally don't like surprises. I'm not a surprise kind of guy. If you decide you don't like me and want to make me feel miserable, just throw me a surprise party.

But there is one kind of surprise that I REALLY like. It's learning something new ... the sort of thing that makes you say, "how did I not learn this years ago???"

Let's say you want the standard output of one command to serve as the input to another command. On day one, a Unix shell beginner might use file redirection:

$ ls >ls_output.tmp
$ grep myfile <ls_output.tmp
$ rm ls_output.tmp

On day two, they will learn about the pipe:

$ ls | grep myfile

This is more concise, doesn't leave garbage, and runs faster.

But what about cases where the second program doesn't take its input from STDIN? For example, let's say you have two directories with very similar lists of files, but you want to know if there are any files in one that aren't in the other.

$ ls -1 dir1 >dir1_output.tmp
$ ls -1 dir2 >dir2_output.tmp
$ diff dir1_output.tmp dir2_output.tmp
$ rm dir[12]_output.tmp

So much for conciseness, garbage, and speed.

But, today I learned about Process Substitution:

$ diff <(ls -1 dir1) <(ls -1 dir2)

This basically creates two pipes, gives them names, and passes the pipe names as command-line parameters of the diff command. I HAVE WANTED THIS FOR DECADES!!!

And just for fun, let's see what those named pipes are named:

$ echo <(ls -1 dir1) <(ls -1 dir2)
/dev/fd/63 /dev/fd/62

COOL!

(Note that echo doesn't actually read the pipes.)


VARIATION 1 - OUTPUT

The "cmda <(cmdb)" construct is for cmda getting its input from the output of cmdb. What about the other way around? I.e., what if cmda wants to write its output, not to STDOUT, but to a named file, and you want that output to be the standard input of cmdb? I'm having trouble thinking here of a useful example, but here's a not-useful example:

cp file1 >(grep xyz)

I say this isn't useful because why use the "cp" command? Why not:

cat file1 | grep xyz

Or better yet:

grep xyz file1

Most shell commands write their primary output to STDOUT. I can think of some examples that don't, like giving an output file to tcpdump, or the object code out of gcc, but I can't imagine wanting to pipe that into another command.

If you can think of a good use case, let me know.


VARIATION 2 - REDIRECTING STANDARD I/O

Here's something that I have occasionally wanted to do. Pipe a command's STDOUT to one command, and STDERR to a different command. Here's a contrived non-pipe example:

process_foo 2>err.tmp | format_foo >foo.txt
alert_operator <err.tmp
rm err.tmp

You could re-write this as:

process_foo > >(format_foo >foo.txt) 2> >(alert_operator)

Note the space between the two ">" characters - this is needed. Without the space, ">>" is treated as the append redirection.

Sorry for the contrived example. I know I've wanted this a few times in the past, but I can't remember why.


And for completeness, you can also redirect STDIN:

cat < <(echo hi)

But this is the same as:

echo hi | cat

I can't think of a good use for the "< <(cmd)" construct. Let me know if you can.


EDIT:

I'm always amused when I learn something new and pretty quickly come up with a good use for it. I had some files containing a mix of latency values and some log messages. I wanted to "paste" the different files into a single file with multiple columns to produce a .CSV. But the log messages were getting in the way.

paste -d "," <(grep "^[0-9]" file1) <(grep "^[0-9]" file2) ... >file.csv

Done! :-)