Wednesday, January 25, 2023

mkbin: make binary data

We've all been there. We want a set of bytes containing some specific binary data to feed into some program. In my case, it's often a network socket that I want to push the bytes into, but it could be a data file, an RS-232 port, etc.

I've written this program at least twice before in my career, but that was long before there was a GitHub, so who knows where that code is? Yesterday I wanted to push a specific byte sequence to a network socket, and I just didn't feel like using a hex editor to poke the bytes into a file.

So I wrote it once again: mkbin.pl

It's a "little language" (oh boy) that lets you specify the binary data in hex or decimal, as 8, 16, 32, or 64-bit integers, big or little endian (or a mix of the two), or ASCII. 


EXAMPLE WITH HTTP

For example, let's say I want to get the home page from yahoo using mkbin.pl interactively from a shell prompt:

$ ./mkbin.pl | nc yahoo.com 80
"GET / HTTP/1.1" 0x0d0a
"Host: yahoo.com" 0x0d0a0d0a

HTTP/1.1 301 Moved Permanently
...

(I typed the first two lines; the yahoo server returned the rest.) Naturally, yahoo wants me to use https, so it is redirecting me. But this is just an example.

Here's a shell script that does the same thing with some comments added:

#!/bin/sh
./mkbin.pl <<__EOF__ | nc yahoo.com 80
"GET / HTTP/1.1" 0x0d0a       # Get request
"Host: yahoo.com" 0x0d0a0d0a  # double cr/lf ends HTTP request
__EOF__

Sometimes it's just easier to echo the commands into mkbin to create a one-liner:

echo '"GET / HTTP/1.1" 0x0d0a "Host: yahoo.com" 0x0d0a0d0a' |
  ./mkbin.pl | nc yahoo.com 80

(Note the use of single quotes to ensure that the double quotes aren't stripped by the shell; the mkbin.pl program needs the double quotes.)


INTEGERS WITH ENDIAN

So far, we've seen commands for inputting ASCII and arbitrary hex bytes. Here are two 16-bit integers with the value 13, first specified in decimal, then in hex:

$ echo '16d13 16xd' | ./mkbin.pl | od -tx1
0000000 00 0d 00 0d
0000004

As you can see, it defaults to big endian.

Here are two 32-bit integers with the value 13, first in little endian, then in big endian:

$ echo '!endian=0 32d13 !endian=1 32xd' | ./mkbin.pl | od -tx1
0000000 0d 00 00 00 00 00 00 0d
0000010

You can also do 64-bit integers (64d13 64xd) and even 8-bit integers (8d13 8xd).


ARBITRARY BYTES

The construct I used earlier with 0x0d0a encodes an arbitrary series of bytes of any desired length. Note that it must have an even number of hex digits; i.e., 0xd is not valid, even though 8xd is.


STRINGS WITH ESCAPES

Finally, be aware that the string construct does not have fancy C-like escapes, like "\x0d". The backslash simply escapes the next character for literal inclusion; it is only useful for getting a double quote or a backslash into the string. For example:

$ echo '"I say, \"Hi\\hello.\"" 0x0a' | ./mkbin.pl | od -tx1
0000000 49 20 73 61 79 2c 20 22 48 69 5c 68 65 6c 6c 6f
0000020 2e 22 0a
0000023
$ echo '"I say, \"Hi\\hello.\"" 0x0a' | ./mkbin.pl
I say, "Hi\hello."



Thursday, January 19, 2023

Nick Cave has Nothing to Fear

Nick Cave doesn't like ChatGPT.

Somebody asked ChatGPT to compose a song in the style of Nick Cave. Nick didn't like it, calling it "replication as travesty" among other things.

I think Nick and other successful singer-songwriters have nothing to fear.

First of all, replication is nothing new. Beginner musicians imitate the styles of their favorite artists all the time. The good ones eventually find their own voices. But what about the wannabes that just get REALLY good at emulating their hero's style? Think "tribute band". Nick doesn't fear them. Nick Cave fans will buy Nick's music, even if a tribute band sounds just like him. Having that tribute band use an AI doesn't change that.

It might be a little dicier if somebody uses an AI to compose a song/painting/whatever in the style of a long-dead artist and claims that it is a newly-found genuine creation of the original artist. This is also nothing new. It's called forgery, and people have been dealing with that for as long as there has been an art market. I can't see how reducing the cost of entry into the forgery profession will lead to a lot more fraud being perpetrated. If anything, it will make consumers even more suspicious of unlikely "discoveries", which is probably a good thing.

Nick's primary complaint seems to be that good music that touches a human's heart can only come from another human heart (usually a tortured one). Bad news, Nick. There's plenty of successful music out there that does not come from the creator's heart, and has no intention of touching the listener's heart. In my youth, they called it "bubble gum music". Cheery, maybe danceable, maybe a catchy riff that you find yourself humming. Think the Monkees or TV commercials. I suspect Nick wouldn't care much one way or the other if that music started coming from AIs instead of good-but-not-great-musicians-who-need-to-eat.

Is serious music in danger of being AI generated?

Well ... maybe? There are plenty of successful singers who are not songwriters. They mostly get their songs from non-performing songwriters. I'm sure that some of those songwriters are tortured artists whose blood and sweat come out in their songs. A lot of others are fairly non-creative mediocre songwriters who figured out a formula and got good at imitation. Give an uninspired song to a really successful singer, and you can have a hit. Is this something that bothers serious songwriters? Probably. There are way more songwriters, both serious and formulaic, than there are successful singers. Maybe the uninspired songwriters have something to fear with AI replacing them. But is anybody that worried about them? I suspect not.

But what about serious non-performing songwriters who really do pour their blood, sweat, and tears into their work? Will AIs replace them?

Maybe. But they have a hard enough time already getting their songs on the air. I have a hard time believing it will make much of a difference. If .00001% of the population lose their jobs doing what they love, I guess that's kind of sad, but I wouldn't call it a tragedy. The number of artisans creating elegant and artistic horse saddles is a small fraction of what it was 150 years ago. Times change.

Wednesday, January 18, 2023

Cheating with AI?

I saw an article about a teacher who got an essay from a student that was well-written. Maybe too well written. Turns out the student used an AI to write it, and turned it in as their own work. The teacher (and the article) predicted massive changes to how school work is assigned, performed, and evaluated.

I'm not sure I understand why.

Cheat Your Way Through School?

Cheating has always been with us. When I was a student, that consisted of copying (verbatim or paraphrasing) from magazines, encyclopedias, or the smart kid in a different class. And while many kids got caught, many others did not. Teachers used to tell us that cheating didn't actually help us prepare for our futures, but kids are too now-focused to understand or care about that. We just knew that our parents would take away our TV privileges if we got a bad report card, so some kids cheated.

The Internet supposedly changed all that since it became trivially easy to cheat. As though lowering the effort would open the floodgates. But it didn't. Sure, you can buy essays on-line now, which makes it easier to cheat, but most kids still don't.

And now AI is about to change all that since it is even more trivially easy (and cheaper) to cheat.

I don't buy it. Cheaters are going to cheat, and it's not obvious to me that making it easier and cheaper to cheat will make a lot more kids into cheaters. 

Cheat Your Way Through Career?

And besides, why do we care? If cheaters make it all the way through college with much higher grades than are deserved, they will more-or-less reach their true level when they start their careers. I've had to fire some programmers who made me wonder whether they had ever written a line of code in their lives. Did they cheat their way through school? Or did the schools just do a bad job of preparing programmers? I don't know, and I don't care. I managed to hire some excellent programmers in spite of getting a few duds. And I suspect the same basic pattern exists in most careers.

I'll focus my discussion on the career of computer programming, but I suspect many of the concepts will apply to other careers.

Maybe the AIs are getting so good that a poor programmer that is good at cheating will produce just as good results as the excellent programmer down the hall. How is that fair? And does it even matter?

My programmers take poorly-described requirements and figure out what the user needs, and then figure out how to incorporate those needs into our existing product. Cheaters can't do that even if they have a great AI at their disposal.

In fact, even that is not what my senior programmers do. They figure out what our users want before the users do. When 29West was just getting started (2003-ish), I don't think there was such a thing as a brokerless general pub-sub messaging system. The financial services industry wanted low latency, but also wanted the flexibility of pub-sub. The idea 29West came up with was to combine peer-to-peer with reliable multicast and the pub-sub model. Figuring out how to do that required dreaming up new ways of doing things. Even if a really good AI had existed back then, it would not have been trained on anything like that.

I guess what I'm saying is that the most advanced AI technology available today is still based on the concept of training the AI with a lot of examples. It will be able to report the state of the art, but I can't see it advancing the state of the art. 

When Does Cheating Stop Being Cheating?

There was a period of time when I was in school when we couldn't use a calculator during a math test. You had to do the arithmetic by hand (and show your work). I suspect that still exists for a month or two when kids first learn what arithmetic is, but I suspect that calculators are now standard issue for even very young students. Is that bad?

I remember hearing people complain. "What if your batteries die? How will the supermarket employee add up your total?" Today, if a store's cash register goes down, commerce stops. And it's not because the employees can't do sums in their heads.

I also remember when poor spelling and grammar were impediments to career advancement. I guess they still are -- if you send me an email with lots of misspellings, I will think a little less of you. With spelling checkers built right into the email client, what's your excuse for not using one? (My mother-in-law used to disapprove of modern schooling where Latin is no longer a required subject. Her point was that learning Latin made you better at spelling. My point is, why bother?)

Remember cursive writing? Does anybody under 30 still use it? Do we still need to be good at shoeing horses? Starting fires with two sticks?

Do we really need everybody to be good at writing essays? Maybe it's time to consign that to the computer as well.

And yes, I know that writing essays is supposed to be a tool for exercising research skills and critical thinking. But is it really? Isn't the essay more of a measurement tool? I.e. if you did a good job of researching and thinking critically, then supposedly that will be reflected in the quality of your essay. But does that really work?

I don't know. And I've definitely strayed out of my area of expertise; I'll stop mansplaining now.

P.S.

I cut and pasted this post into ChatGPT and asked it to rewrite it better. It certainly shortened it, and included most of my main points. But it also missed a few points I consider important. And it made it a lot more boring, IMO. Then again, I always have liked to hear myself speak, so I'm biased.

OpenAI GPT-3: Initial Impressions

So, ChatGPT is at capacity right now, and I'm sure not going to sit here and hit refresh every few seconds to try to get in. So I went to https://beta.openai.com/examples and tried a few.

My impressions: disappointing given all the hype, but if I hadn't seen the hype, I would be reasonably impressed.

[UPDATE: I've since gotten into the real ChatGPT, and I must say it is more impressive than the sandbox I played with in this post. The sandbox is based on GPT-3, while ChatGPT is GPT-3.5.]


In the following examples, I am including output from OpenAI's GPT-3 model. In accordance with OpenAI's wishes, I note that my name is Steve Ford and that the AI-generated text is identified as such.

Analogy maker

My entry is first, followed by the AI's response.

Get the logic right, then optimize.

It's like building a house: first make sure the foundation is solid, then make it look nice.

Not bad.


Do unto others as you would have them do unto you.

It's like a golden rule of giving and receiving; treat others as you would like to be treated.

Um, I don't think you can use a definition as an analogy. The point of an analogy is to improve understanding of a concept by re-structuring it in different terms that the reader might be more familiar with.


Look out for number one.

It's like putting on your own oxygen mask before helping others on an airplane.

Impressively good and bad. Yes, it kind of understood what the phrase means, but it also completely misses the essence of the phrase. "Look out for number one" is all about being selfish (Collins dictionary says, "If you are looking out for number one, you are thinking of yourself rather than considering other people"), whereas the analogy is about doing something that might seem selfish at first glance, but is actually in support of generosity and the greater good.

Pro tip: don't use GPT-3 to learn the meaning of idioms.


Do unto others ... and then split

Give a hand to those in need ... and then walk away.

This one might not be fair; I wonder how many under-50-year-olds would get that reference. Anyway, the analogy is the exact opposite of the original.

Pro tip: don't use GPT-3 to learn about 50-year-old memes.


There are other example applications that I would like to play with, but who has time for that? Over time, I'll probably have more to say about AI.


Invert the Meaning

I took the analogy maker settings and changed the command (or whatever that first line is called) to "Invert the meaning:".

Time flies like an arrow.

Time stands still like a rock.

Not idiomatic, but impressive.


Smoking is bad for you.

Not smoking is bad for you.

Yep, it will lie if you tell it to.


Write an Essay

Took the analogy maker settings and changed the command to "Write an essay:"

Why structured programming is bad.

Structured programming is a programming paradigm that relies on breaking down a program into smaller, manageable parts. It is a popular approach to programming because it is easy to understand and can be used to create complex programs. However, there are a number of drawbacks to structured programming that can make it a less than ideal choice for certain projects.

The first issue with structured programming is that it can be difficult to maintain. Because the code is broken down into smaller parts, it can be difficult to keep track of how the parts fit together. This can lead to bugs and errors that can be difficult to debug. Additionally, if changes need to be made to the program, it can be difficult to make sure that all the parts are updated correctly.

Another issue with structured programming is that it can be inflexible. Structured programming relies on a set of predetermined rules and structures, which can limit the options for solving a problem. If a programmer needs to come up with a creative solution, structured programming may not be the best choice.

Finally, structured programming can be slow. The small parts of a program can take a long time to process, especially if there are a lot of them. This can lead to programs that take a long time to run, which can be a problem for projects that require fast results.

In conclusion, structured programming can be a useful approach for certain projects, but it is not without its drawbacks. It can be difficult to maintain, inflexible, and slow. For projects that require creative solutions or fast results, it may be better to use an alternative approach.

Tuesday, December 27, 2022

Tgen: Traffic Generator Scripting Language

Oh no, not another little language! (Perhaps better known as a "domain-specific language".)

Yep. I wrote a little scripting language module intended to assist in the design and implementation of a useful network traffic generator. It's called "tgen" and can be found at https://github.com/fordsfords/tgen. I won't write much about it here other than to mention that it implements a simple interpreter. In fact, the simplicity of the parser might be the thing I'm most pleased with, in terms of bang for the buck. See the repo for details.

Little Languages Considered Harmful?

So yeah, I'm reasonably pleased with it. But I am also torn. Because "little languages" have both a good rap (Jon Bentley, 1986) and a bad rap (Olin Shivers, 1996).

In that second paper, Shivers complains that little languages:

  • Are usually ugly, idiosyncratic, and limited in expressiveness.
  • Basic linguistic elements such as loops, conditionals, variables, and subroutines must be reinvented and re-implemented. It is not an approach that is likely to produce a high-quality language design.
  • The designer is more interested in the task-specific aspects of his design, to the detriment of the language itself. For example, the little language often has a half-baked variable scoping discipline, weak procedural facilities, and a limited set of data types.
  • In practice, it often leads to fragile programs that rely on heuristic, error-prone parsers.

Of course, Shivers doesn't *really* think that little languages are a bad idea. He just thinks that they are usually implemented poorly, and his paper shows the right way to do it (in Scheme, a Lisp variant).

But there are some good arguments against developing little languages at all, even if implemented well. At my first job out of college, I wrote a little language to help in the implementation of a menu system. The menus were tedious and error-prone to write, and the little language improved my productivity. I was proud of it. An older and wiser colleague gently told me that there are some fundamental problems with the idea. His reasoning was as follows:

  • We already have a programming language that everybody knows and is rich and well-tested.
  • You've just invented a new programming language. It probably has bugs in the parser and interpreter that you'll have to find and fix. Maybe the time you spend doing that is paid for by the increased productivity in adding menus. Maybe not.
  • The new language is known by exactly one person on the planet. Someday you'll be on a different project or a different company, and we can't hire somebody who already knows it. There's an automatic learning curve.
  • Instead of writing an interpreter for a little language, you could have simply designed a good API for the functional elements of your language, and then used the base language to call those functions. Then you have all the base language features at your disposal, while still having a high level of abstraction to deal with the menus.

He was a nice guy, so while his criticism stung, he didn't make it personal, and he was tactful. I was able to see it as a learning experience. And ever since then, I've been skeptical of most little languages.
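My colleague's last point -- design an API instead of inventing a language -- is worth a concrete illustration. Here's a hypothetical sketch in Python (the names and the menu domain are mine; that old project was neither in Python nor structured like this). Instead of parsing a menu-description language, you expose a small API and write the menus in the base language, getting its loops, conditionals, and variables for free.

```python
# Hypothetical menu API: the base language replaces a little language.
class Menu:
    def __init__(self, title):
        self.title = title
        self.items = []   # list of (label, action) pairs

    def item(self, label, action):
        """Add a menu entry; returns self so calls can be chained."""
        self.items.append((label, action))
        return self

    def render(self):
        """Produce the text of the menu, one numbered entry per line."""
        lines = [self.title]
        lines += [f"  {i + 1}. {label}"
                  for i, (label, _) in enumerate(self.items)]
        return "\n".join(lines)

# "Scripting" a menu is now just ordinary code:
main = (Menu("Main Menu")
        .item("Open file", lambda: "open")
        .item("Quit", lambda: "quit"))
```

The abstraction level is about the same as a menu DSL, but there's no parser to debug and nothing new for the next programmer to learn.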

Then Why Did I Make a Little Language?

Well, first off, I *DID* create an API. So instead of writing scripts in the scripting language, you *can* write them in C/C++. I expect this to be interesting to QA engineers wanting to create automated tests that might need sophisticated usage patterns (like waiting to receive a message before sending a burst of outgoing traffic). I would not want to expand my scripting language enough to support that kind of script. So being able to write those tests in C gives me all the power of C while still giving me the high level of abstraction for sending traffic.

But also, a network traffic generator is a useful thing to be able to run interactively for ad-hoc testing or exploration. It would be annoying to have to recompile the whole tool from source each time you want to change the number or sizes of messages.

Of course, most traffic generation tools take care of that by letting you specify the number and sizes of the messages via command-line options or GUI dialogs. But most of them don't let you have a message rate that changes over time. My colleagues and I deal with bursty data. To properly test the networking software, you should be able to create bursty traffic. Send at this rate for X milliseconds, then at a much higher rate for Y milliseconds, etc. The "tgen" module lets you "shape" your data rate in non-trivial ways.
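To make the "shaping" idea concrete, here is a hypothetical sketch in Python -- not tgen's actual API or script syntax -- of turning a list of (rate, duration) phases into a schedule of send times. A driver loop can then sleep until each time and send a message.

```python
import time

def burst_schedule(phases):
    """Yield send times (seconds from start) for a bursty traffic shape.

    Each phase is (rate_msgs_per_sec, duration_ms) -- hypothetical
    parameters for illustration, not tgen's real script commands.
    """
    t = 0.0
    for rate, duration_ms in phases:
        duration = duration_ms / 1000.0
        n = int(rate * duration)      # messages in this phase
        for i in range(n):
            yield t + i / rate        # evenly spaced within the phase
        t += duration

def run(phases, send):
    """Drive a send callback in real time according to the schedule."""
    start = time.monotonic()
    for when in burst_schedule(phases):
        delay = start + when - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        send()
```

For example, burst_schedule([(1000, 10), (10000, 5)]) sends at 1,000 msgs/sec for 10 ms, then bursts at 10,000 msgs/sec for 5 ms.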

Did I get it right? My looping construct is ugly and could only be loved by an assembly language programmer. Maybe I should have made it better? Or just omitted it? Dunno. I'm open to discussion.

Anyway, I'm hoping that others will be able to take the tgen module and do something useful with it.

Sunday, December 25, 2022

Critical Program Reading (1975) - 16mm Film

I find this film delightful: Critical Program Reading (1975) - 16mm Film

I would love to know about choices the filmmaker made. The vibe seems very 1960s; was that intentional?

I also didn't know that structured programming methods were that old. I was born in 1957, and according to Wikipedia, the concept of "structured programming" was born around the same time, although the term was first popularized by Dijkstra's 1968 open letter "Go To Statement Considered Harmful".

For some reason, I thought the "structured programming wars" were during the mid-to-late 1980s, when the old-school "spaghetti code" techniques were finally being replaced by more modern techniques. I guess I thought this because I clearly remember the "Goto Considered Harmful" Considered Harmful letter, and its replies. But the true war against spaghetti code was pretty much over by then. The battle at that point was not about whether we should use descriptive identifier naming, block structure, and simple control flow. It was about whether the abolition of the goto should be absolute.

<rant read="optional">

I also remember feeling insulted by Dijkstra's On a Somewhat Disappointing Correspondence. He said that a competent professional programmer in 1987 should know the theorem of "the bounded linear search" and should be able to derive that theorem and its proof. I could not even read the theorem since I was not familiar with the notation. And none of my colleagues could either. I suspect that a small percentage of professional programmers of the day (and today also) would qualify as competent by Dijkstra's standards.

In retrospect, I do have some sympathy for Dijkstra's opinion. He knew full well that his standards did not match those of the programming profession. That's exactly what he was complaining about. He strongly felt that programmers should be grounded in the science of computer science. He wanted programmers to spend their time proving their algorithms correct, not slavishly (and inadequately) testing them. I suspect he wasn't saying that the programmers of the day were bad or stupid people, but that they were improperly educated and then released into the field prematurely. I suspect he might agree with, "You are not competent, but it's probably not your fault. It's more the fault of the university that gave you a degree and the company that hired you." Part of me wishes that I and the rest of the world were more dedicated to rigor and depth of mastery.

But, of course, we are not. Airline pilots are not trained to design an airplane. House painters can't give you the chemical formulae of their paints. I remember when my wife had cancer, she was advised against using a surgeon who was a highly respected researcher; she should use a doctor who does hundreds of these surgeries per year. You usually want an experienced practitioner, not a theoretician.

Is the same thing true of programmers? Well, I will note that Dijkstra's program uses single-letter variables, a definite no-no in most structured programming. If he had submitted that to me as part of a job application, I doubt I would have hired him. But maybe that's because *I* am not competent. Maybe software would be much better today if we programmers met Dijkstra's standards. But there would be a heck of a lot less software out there, that's for sure. And cynical humor aside, I do rather like having a smart phone with a GPS.

</rant>

Friday, October 28, 2022

So long, Delta

Note to my technically-oriented readers (all two of you). You might want to skip this post. It is not technical in nature.

I used to use Delta Airlines a lot. Not anymore.

https://www.cnbc.com/amp/2022/10/22/delta-air-lines-settles-with-pilot-who-raised-safety-concerns.html

https://www.seattletimes.com/business/boeing-aerospace/delta-weaponized-mental-health-rules-against-a-pilot-she-fought-back/

So, am I refusing to fly Delta because they were unfair and downright evil to an employee? Heck no. Happens all the time. Sometimes employers get caught, and they have to make amends. Sometimes they don't get caught, and they get away with it. If I refused to deal with companies who misbehave, I would have to become a hermit doing subsistence farming with rocks and sticks.

No, I'm refusing because even after Delta's dirty tricks were exposed and their claims totally debunked, the manager behind it all was "...promoted to CEO of Endeavor, Delta’s regional carrier subsidiary, and senior vice president of Delta Connection, the airline’s partnership with regional carriers Skywest and Republic Airways."

DUDES! When you get caught, you're supposed to at least pretend to be sorry! And somebody has to become the sacrificial lamb. But I guess that only happens when the lamb is a low-level employee. When it's a bigwig, then better to just close ranks, settle, and pretend everything's cool. Nothing to see here, move along.

Sorry, no can do. And yes, I know, I don't have all the facts. Maybe more facts will come to light. Maybe Graham truly believed he was doing the ethical and moral thing.

What do you say, Graham? Did you genuinely believe in your heart that Petitt wasn't safe to fly? Based on her complaints about safety? Feel free to post a reply here. If it sounds credible, I'll post a retraction and fly Delta again.

Hopefully I can sleep tonight over the sound of crickets.

Tuesday, July 5, 2022

The Time I Found a Hardware Bug

As I approach age 65, I've been doing some reminiscing. And discovering that my memory is imperfect. (pause for laughter) So I'm writing down a story from earlier days in my career. The story has no real lessons that are applicable today, so don't expect to gain any useful insights. But I remember the story fondly so I don't want it to slip completely from my brain.

WARNING: this story is pretty much self-indulgent bragging. "Wow, that Steve guy sure was smart! I wonder what happened."

I think it was 1987, +/- 2 years (it was definitely prior to Siemens moving to Hoffman Estates in 1989).

The product was "Digitron", an X-ray system based on a Multibus II backplane and an 8086 (or was it 80286?) CPU board running iRMX/86. I think the CPU board was off-the-shelf from Intel, but most of the rest of the boards were custom, designed in-house.

At some point, we discovered there was a problem. We got frequent "spurious interrupts". I *think* these spurious interrupts degraded system performance to the degree that sometimes the CPU couldn't keep up with its work, resulting in a system failure. But I'm not sure -- maybe they just didn't like having the mysterious interrupts. At any rate, I worked on diagnosing it.

The CPU board used an 8259A interrupt controller chip (datasheet here or here) that supported 8 vectored interrupts. There was a specific hardware handshake between the 8259A and the CPU chip that let the 8259A tell the CPU the interrupt vector. The interrupt line is asserted and must be held active while the handshake takes place. At the end of the hardware handshake, the CPU calls the ISR, which interacts with the interrupting hardware. The ISR clears the interrupt (i.e. makes the hardware stop asserting the interrupt line) before returning.

According to the 8259A datasheet, spurious interrupts are the result of an interrupt line being asserted, but then removed, before the 8259A can complete the handshake. Essentially the chip isn't smart enough to remember which interrupt line was asserted if it went away too quickly. So the 8259A declares it "spurious" and defaults to level 7.

I don't remember how I narrowed it down, but I somehow identified the peripheral board that was responsible.

For most of the peripheral boards, there was a single source of interrupt, which used an interrupt line on the Multibus. But there was one custom board (don't remember which one) where they wanted multiple sources of interrupt, so the hardware designer included an 8259A on that board. Ideally, it would have been wired to the CPU board's 8259A in its cascade arrangement, but the Multibus didn't allow for that. So the on-board 8259A simply asserted one of the Multibus interrupt lines and left it to the software to determine the proper interrupt source. The 8259A was put in "polled mode", and the ISR for the board's interrupt would read the status of the peripheral 8259A to determine which of the board's "sub-interrupts" had happened. The ISR would then call the correct handler for that sub-interrupt.

Using an analog storage scope, I was able to prove that the peripheral board's 8259A did something wrong when used in its polled mode. The peripheral board's 8259A asserted the Multibus interrupt level, which led to the CPU board properly decoding the interrupt level and invoking the ISR. The ISR then performed the polling sequence, which consisted of reading the status and then writing something to clear the interrupt. However, the scope showed that during the status read operation, while the multibus read line was asserted, the 8259A released its interrupt output. When the read completed, the 8259A re-asserted its interrupt. This "glitch" informed the CPU board's 8259A that there was another interrupt starting. Then, when the ISR cleared the interrupt, the 8259A again released its interrupt. But from the CPU board's 8259A's point of view, that "second" interrupt was not asserted long enough for it to handshake with the CPU, so it was treated as a spurious interrupt.

(Pedantic aside: although I use the word "glitch" to describe the behavior, that's not the right terminology. A glitch is typically caused by a hardware race condition and would have zero width if all hardware had zero propagation delay. This wasn't a glitch because the release and re-assert of the interrupt line was tied to the bus read line. No race condition. But it resembled a glitch, so I'll keep using that word.)

HARDWARE BUG?

The polling mode of operation of the 8259A was a documented and supported use case. I consider it a bug in the chip design that it would glitch the interrupt output during the status read operation. But I didn't have the contacts within Intel to raise the issue, so I doubt any Intel engineer found out about it.

WORKAROUND

I designed a simple workaround that consisted of a chip - I think it was a triple, 3-input NAND gate, or maybe NOR, possibly open collector - wired to be an AND function. The interrupt line was active low, so by driving it with an AND, it was possible to force it to active (low). I glued the chip upside-down onto the CPU board and wire-wrapped directly to the pins. One NAND gate was used as an inverter to make another NAND gate into an AND circuit. One input to the resulting AND was driven by the interrupt line from the Multibus, and the other input was driven by an output line from a PIO chip that the CPU board came with but wasn't being used. I assume I had to cut at least one trace and solder wire-wrap wire to pads, but I don't remember the details.

The PIO output bit is normally inactive, so that when the peripheral board asserts an interrupt, the interrupt is delivered to the CPU. When the ISR starts executing, the code writes the active value to the PIO bit, which forces the AND output to stay low. Then the 8259A is polled, which glitches the Multibus interrupt line, but the AND gate keeps the interrupt active, masking the glitch. Then the ISR writes the inactive value to the PIO and clears the interrupt, which releases the Multibus interrupt line. No more spurious interrupt.

Kludge? Hell yes! And a hardware engineer assigned to the problem figuratively patted me on the head and said they would devise a "proper" solution to the spurious interrupt problem. After several weeks, that "proper" solution consisted of using a wire-wrap socket with its pins bent upwards so that instead of wire-wrapping directly to the chip's pins, they wire-wrapped to proper posts.

Back in those days, people didn't have a digital camera in their pocket, so I have no copy of the picture I took of the glitch. And I'm not confident that all the details above are remembered correctly. E.g. I kind of remember it was a NOR gate, but that doesn't make logical sense. Unless maybe I used all 3 gates and boolean algebra to make an AND out of NOR gates? I don't remember. But for sure the point was to mask the glitch during the execution of the ISR.

But I remember the feeling of vindication. My hardware training made me more valuable than a pure software engineer.

Sunday, July 3, 2022

Math Nerd?

I just made two posts on recreational math. I'm what you might call a math nerd wannabe. I'm NOT a math nerd - I don't have the flair or the rigor required to make that claim - but I've always wished I were.

I used to read Martin Gardner in Scientific American. And I tried to enjoy it with mixed success. More recently, I subscribed to Numberphile, but finally unsubscribed when I realized I tend to lose focus about halfway through most of the videos. And 3Blue1Brown? The same but more. It's not just that I have trouble following the math (although I often do), I'm just not interested enough to try hard enough. But darn it, I wanna be! :-)

When I was very young, I aspired to be a scientist so I could invent cool things. Never mind that theoretical scientists and inventors tend to be very different kinds of people; in both cases, I don't have the knack. I think I'm more of a hobbyist who discovered that he could be paid well for his hobby. I've never invented a cool algorithm, but I've enjoyed implementing cool algorithms that real scientists have invented. I like tinkering, taking things apart to see what makes them tick, and sometimes even putting them back together.

Not that there's anything wrong with this. I've led, and continue to lead, a happy, productive, and fulfilling life. I'm reasonably well-liked and respected by my peers. I have no complaints about how life has treated me.

But I am sometimes wistful about what might have been ... being a math nerd/scientist/inventor would be pretty cool too.

Anyway, I won't be making regular posts about math ... unless I do. ;-)

Information in the Noise

Wow, a non-math nerd posting twice about math. What's that about?

Derek Muller of Veritasium posted a video about the 100 prisoners puzzle (I like "puzzle" in this context better than "riddle" or "problem"). Unlike my earlier post, I have no complaints about this video. Derek is one of the top-tier educational YouTubers, and he did a fantastic job of explaining it. (As before, I'm not going to explain it here; watch his video. Seriously, just watch it.)

So why do I feel the need to comment? I guess I feel I have a small but interesting (to me) tidbit to add.

Derek et al. describe the puzzle's "linked list" solution (my name) as giving a counter-intuitive result, and I guess I have to agree. The numbers are distributed to the boxes randomly, so how could any strategy give a prisoner a better chance of success than random selection? IT'S RANDOM!!!!!!

AN INTUITIVE UNDERSTANDING

And here's my tidbit: it's not as random as it seems. For this puzzle, the numbers are assigned randomly to boxes, without replacement. I.e., you won't find a given number in more than one box, and no number between 1 and 100 is skipped. This is obvious for the setup of the puzzle, but randomizing without replacement puts constraints on the system. Those constraints add information to the noise.

If prisoner number 13 randomly opens box 52, he knows he has a one in 100 chance of seeing his number in that box. He opens it and sees the number 1. He now knows FOR SURE that no other box has the number 1 in it. So his second random choice will have a one in 99 chance of being his number. Each choice gives some information that affects the probability of the next choice. (I.e., the samples are not independent.)

It is these constraints that lead directly to the cycles that are at the heart of the puzzle. And clever people have calculated the probability of having a cycle longer than 50 to be about 0.688. So the "linked list" strategy gives the prisoners a ~= 0.312 probability of being set free. That's the point of Derek's video.
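That 0.688/0.312 split is easy to sanity-check with a quick Monte Carlo sketch. This is my code, not anything from the video; `prisoners_succeed` is a name I made up. The key observation is that all 100 prisoners succeed with the linked-list strategy exactly when no cycle of the random permutation is longer than 50:

```python
import random

def prisoners_succeed(n=100, limit=50):
    """One trial: numbers assigned to boxes without replacement
    (i.e., a random permutation). Every prisoner following the
    linked-list strategy succeeds iff no cycle exceeds `limit`."""
    boxes = list(range(n))
    random.shuffle(boxes)
    seen = [False] * n
    for start in range(n):
        length = 0
        k = start
        while not seen[k]:       # walk this cycle, marking as we go
            seen[k] = True
            k = boxes[k]
            length += 1
        if length > limit:
            return False
    return True

random.seed(1)
trials = 20000
wins = sum(prisoners_succeed() for _ in range(trials))
print(wins / trials)  # close to 1 - 0.688 = 0.312
```

Twenty thousand trials keep the sampling noise well under a percentage point, so the estimate lands comfortably near 0.312.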

Let's ruin the puzzle for a moment. Let's assign a random number between 1 and 100 to each box with replacement. It's entirely possible, even probable, that you'll have duplicates (the same number in more than one box) and skips (a number that is not in any box). One effect of this change is that the numbers will no longer necessarily be arranged in cycles. You can have many numbers NOT in a cycle. So the "linked list" solution to the puzzle doesn't improve your chances of survival over pure chance. Getting rid of the "without replacement" constraint removes the information from the noise.

This is how I get an intuitive feeling that you can have a much higher probability of success with the "linked list" solution to the original puzzle - you're taking advantage of the information that's in the noise.

WITH REPLACEMENT

What about my ruined version, where the numbers are assigned to boxes with replacement? To start with, let's calculate the probability that you get a distribution of numbers in boxes that is even possible for the prisoners to win (i.e., every number 1-100 is assigned exactly once). My probability-fu is weak, but I'll try. I think it is (100!)/(100**100) ~= 9.33e-43. Wow, that's a really low probability.

On the off chance that you get a solvable distribution, the probability of success with the linked list solution is ~= 0.312. So the total probability of success for my ruined version, WITH the linked list solution, is ~= 2.9e-43. If instead the prisoners choose their boxes randomly, then it's ~= 7.36e-73.
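Since my probability-fu is weak, here's a small script (mine, purely to check the arithmetic) that computes each of those numbers from first principles:

```python
import math

n = 100

# Probability that assigning numbers to boxes WITH replacement happens
# to use every number exactly once (i.e., yields a permutation): n!/n^n
p_perm = math.factorial(n) / n**n
print(f"solvable distribution: {p_perm:.3g}")   # ~9.33e-43

# P(longest cycle > 50) = sum_{k=51}^{100} 1/k ~ 0.688, so the
# linked-list strategy succeeds with probability 1 minus that.
p_strategy = 1 - sum(1 / k for k in range(51, n + 1))
print(f"linked-list success:   {p_strategy:.3f}")  # ~0.312

print(f"ruined + linked list:  {p_perm * p_strategy:.3g}")
print(f"ruined + random picks: {p_perm * 0.5**n:.3g}")
```

The random-picks line uses the fact that each prisoner opening 50 of 100 boxes at random finds his number with probability 1/2, independently, giving (1/2)**100 for all of them.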

The prisoners had better appeal to Amnesty International.

There is no Vase

I'm not a math nerd, so I probably shouldn't be posting on this subject. But when has a lack of expertise ever stopped me from having an opinion?

I just watched the Up and Atom video: An Infinity Paradox - How Many Balls Are In The Vase? In it, Jade describes the Ross–Littlewood paradox related to infinite pairings. I liked the video but was not satisfied with the conclusion.

I won't give the background; if you're interested in this post, go watch the video and skim the Wikipedia article. Basically, she presents the "Depends on the conditions" solution (as described in the Wikipedia article) without mentioning the "underspecified" and "ill-formed" solutions. And I guess that's an OK choice since the point of her video was to talk about infinities and pairings. But she kept returning to the question, "how many balls are there *actually*?"

Infinity math has many practical applications, especially if the infinity is related to the infinitely small. An integral is frequently described as the sum of the areas of rectangles under a curve as the width of the rectangles becomes infinitesimal - i.e., approaches zero. This gives a mathematically precise calculation of the area. Integrals are a fundamental tool for any number of scientific and engineering fields.

But remember that math is just a way of modeling reality. It is not *really* reality.

There is no such thing as an infinitesimal anything. There is a minimum distance, a minimum time, and the uncertainty principle guarantees that even as you approach the minimum in one measure, your ability to know a different measure decreases. When the numbers become small enough, the math of the infinitesimal stops being an accurate model of reality, at least not in the initially intuitive ways.

But they are still useful for real-world situations. Consider the paradox of Achilles and the tortoise, one of Zeno's paradoxes. (Again, go read it if you don't already know it.) The apparent paradox is that Achilles can never catch up to the tortoise, even though we know through common experience that he will catch up with and pass the tortoise. The power of infinity math is that we can model it and calculate the exact time he passes the tortoise. The model will match reality ... unless an eagle swoops down, grabs the tortoise, and carries it across the finish line. :-)

But models can break down, even without eagles, and a common way for infinity models to break down is if they don't converge. 1/2 plus 1/4 plus 1/8 plus 1/16 ... converges on a value (1). As you add more and more terms, it approaches a value that it will never exceed with a finite number of terms. So we say that the sum of the *infinite* series is *equal* to the limit value, 1 in this case. But what about 1/2 plus 1/3 plus 1/4 plus 1/5, etc.? This infinite series does NOT converge. It grows without bound. And therefore, we cannot claim that it "equals" anything at infinity. We could claim that the sum equals infinity, but this is not well defined since infinity is not a number.
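A quick numeric illustration of the difference (my sketch, nothing more): the partial sums of the geometric series pile up against 1, while the harmonic-style series just keeps growing, however slowly.

```python
# Geometric: 1/2 + 1/4 + 1/8 + ... approaches 1 and never exceeds it.
geometric = sum(1 / 2**k for k in range(1, 51))
print(geometric)  # within 2**-50 of 1

# Harmonic-style: 1/2 + 1/3 + 1/4 + ... grows without bound.
for n in (10, 1000, 100000):
    harmonic = sum(1 / k for k in range(2, n + 1))
    print(n, harmonic)
```

Each thousand-fold increase in the number of harmonic terms adds roughly the same amount to the sum (it grows like the natural log), which is exactly the "no limit to converge on" behavior described above.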

Here's a similar train of thought. What is 1/0? If you draw a graph of 1/X, you will see the value grow larger and larger as X approaches 0. So 1/0 must be infinity. What is 0 * (1/0)? Again, if you graph 0 * (1/X), you will see a horizontal line stuck at zero as X approaches 0. So I guess that 0 * (1/0) equals 0, right? Not so fast. Let's graph X * (1/X). That is a horizontal line stuck at 1. So as X approaches 0, X * (1/X) equals 1. So 0 * 1/0 equals 1. WHICH ONE IS RIGHT???????? What *really* is 0 * (1/0)?
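To see numerically why the three graphs disagree (my illustration, not from the post): as x shrinks, 1/x blows up, yet 0*(1/x) stays pinned at 0 and x*(1/x) stays pinned at (essentially) 1 the whole way down.

```python
# Three expressions that all "involve 1/0 in the limit" but head
# to three different places: infinity, 0, and 1.
for x in (0.1, 0.001, 1e-9):
    print(f"x={x:g}  1/x={1/x:g}  0*(1/x)={0*(1/x):g}  x*(1/x)={x*(1/x):g}")
```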

The answer is that the problem is ill-formed. The 1/X term does not converge. The value of 1/0 is not "equal to infinity", it is undefined. My train of thought above is similar to the fallacious "proof" that 1 equals 2. And it seems to me that the "proof" that the number of balls in the vase can be any number you want it to be is another mathematical fallacy.

The only way to model the original vase problems is to draw a graph of the number of balls in the vase over time. Even in the case where you remove the balls sequentially starting at 1, you will see the number of balls growing without bound as time proceeds. Since this function does not converge, you can't say that it "equals" anything at the end. But it tends towards infinity, so claiming that it equals some finite value *at* the end is another example of an invalid application of math to reality.

But I shouldn't complain. Jade used the "paradox" to produce an engaging video teaching about pairing elements in infinite sets. And she did a good job of that.

Wednesday, May 4, 2022

CC0 vs GPL

I've been writing little bits and pieces of my own code for many years now. And I've been releasing it as CC0 ("public domain"; see below). I've received a bit of criticism for it, and I guess I wanted to talk about it.

I like to write software. And I like it when other people benefit from my software. But I don't write end-user software, so the only people who benefit from my code are other programmers. But that's fine, I like a lot of programmers, so it's all good.

There are different ways I could offer my software. Much open-source software is available under a BSD license, an Apache license, or an MIT license. These differ in ways that are probably important to legal types, but for the most part, they mean that you can use the code for pretty much any purpose as long as you give proper attribution to the original source. So if I write a cool program and use some BSD code, I need to state my usage of that code somewhere in my program's documentation.

So maybe I should do that. After all, if I put in the effort to write the code, shouldn't I get the credit?

Yeah, that and a sawbuck will get me a cup of coffee. I don't think those attributions are worth much more than ego-boosting, and I guess my programmer ego doesn't need that boost.

With the exception of the GNU General Public License (GPL), I don't think most open source ego-boosting licenses buy me anything that I particularly want. And they do introduce a barrier to people using my code. I've seen other people's code that I've wanted but decided not to use because of the attribution requirement. I don't want attributions cluttering up my documentation or adding licensing complications for anybody who wants to use my code. (For example, I was using somebody else's getopt module for a while, but realized I wasn't giving proper attribution, so I wrote my own.)

But what about GNU?

The GPL is a different beast. It is intended to be *restrictive*. It places rules and requirements on the use of the code. It places obligations on the programmers. The stated goal of these restrictions is to promote freedom.

But I don't think that is really the point of GPL. I think the real point of GPL is to let certain programmers feel clean. These are programmers who believe that proprietary software is evil, and by extension, any programmer who supports proprietary software is also evil. So ignoring that I write proprietary software for a living, my CC0 software could provide a small measure of support for other proprietary software companies, making their jobs easier. And that makes me evil. Not Hitler-level evil, but at least a little bit evil.

If I license my code under GPLv3, it will provide the maximum protection possible for my open-source code to not support a proprietary system. And that might let me sleep better at night, knowing that I'm not evil.

Maybe somebody can tell me where I'm wrong on this. Besides letting programmers feel clean, what other benefit does GPL provide that other licenses (including CC0) don't?

I've read through Richard Stallman's "Why Open Source Misses the Point of Free Software" a few times, and he keeps coming back to ethics, the difference between right and wrong. Some quotes:

  • "The free software movement campaigns for freedom for the users of computing; it is a movement for freedom and justice."
  • "These freedoms are vitally important. They are essential, not just for the individual users' sake, but for society as a whole because they promote social solidarity—that is, sharing and cooperation."
  • "For the free software movement, free software is an ethical imperative..."
  • "For the free software movement, however, nonfree software is a social problem..."

I wonder what other things a free software advocate might believe. Is it evil to have secret recipes? Should Coke's secret formula be published? If I take a recipe that somebody puts on YouTube and I make an improvement and use the modified recipe to make money, am I evil? What if I give attribution, saying that it was inspired by so-and-so's recipe, but I won't reveal my improvement? Still evil?

How about violin makers that have secret methods to get a good sound? Evil?

I am, by my nature, sympathetic to saying yes to all of those. I want the world to cooperate, not compete. I used to call myself a communist, believing that there should be no private property, and that we should live according to, "From each according to his ability, to each according to his needs". And I guess I still do believe that, in the same way that I believe we should put an end to war, cruelty, apathy, hatred, disease, hunger, and all the other social and cultural evils.

Oh, and entropy. We need to get rid of that too.

But none of them are possible, because ... physics? (That's a different subject for a different day.)

But maybe losing my youthful idealism is nothing to feel good about. Instead of throwing up my hands and saying it's impossible to do all those things, maybe I should pick one of them and do my best to improve the world. Perhaps the free software advocates have done exactly that. They can't take on all the social and cultural ills, so they picked one in which they could make a difference.

But free software? That's the one they decided was worth investing their altruism?

Free software advocates are always quick to point out that they don't mean "free" as in "zero cost". They are referring to freedoms - mostly the freedom to run a modified version of a program, which is a freedom that is meaningless to the vast majority of humanity. I would say that low-cost software is a much more powerful social good. GPL software promotes that, but so do the other popular open-source licenses. (And so does CC0).

So anyway, I guess I'm not a free software advocate (big surprise). I'll stick with CC0 for my code.

What is CC0

The CC0 license attempts to codify the concept of "public domain". The problem with just saying "public domain" is that the term does not have a universally agreed-upon definition, especially legally. So CC0 is designed to approximate what we think of as public domain.

Tuesday, February 15, 2022

Pathological cases

Jacob Kaplan-Moss said something wonderful yesterday:

Designing a human process around pathological cases leads to processes that are themselves pathological.

This really resonated with me.

Not much to add, just wanted to share.

Thursday, February 3, 2022

Nice catch, Grammarly

I was writing an email and accidentally left out a word. I meant to write, "I've asked the team for blah...". But I accidentally omitted "asked", so it just said, "I've the team for blah...".

Grammarly flagged "I've", suggesting "I have". Since my brain still couldn't see my mistake, I thought it was complaining about "I've asked the team...". I was about to dismiss it, but decided to click the "learn more" link. It said that, except in British English, using the contraction "I've" to express possession sounds unnatural or affected. As in: "Incorrect: I've a new car".

Ah HAH! That triggered me to notice the missing word "asked". I put it in, and Grammarly was happy. I consider this a good catch. Sure, it misdiagnosed the problem, but it knew it was a problem.

Thanks, Grammarly!


Wednesday, January 5, 2022

Bash Process Substitution

I generally don't like surprises. I'm not a surprise kind of guy. If you decide you don't like me and want to make me feel miserable, just throw me a surprise party.

But there is one kind of surprise that I REALLY like. It's learning something new ... the sort of thing that makes you say, "how did I not learn this years ago???"

Let's say you want the standard output of one command to serve as the input to another command. On day one, a Unix shell beginner might use file redirection:

$ ls >ls_output.tmp
$ grep myfile <ls_output.tmp
$ rm ls_output.tmp

On day two, they will learn about the pipe:

$ ls | grep myfile

This is more concise, doesn't leave garbage, and runs faster.

But what about cases where the second program doesn't take its input from STDIN? For example, let's say you have two directories with very similar lists of files, but you want to know if there are any files in one that aren't in the other.

$ ls -1 dir1 >dir1_output.tmp
$ ls -1 dir2 >dir2_output.tmp
$ diff dir1_output.tmp dir2_output.tmp
$ rm dir[12]_output.tmp

So much for conciseness, garbage, and speed.

But, today I learned about Process Substitution:

$ diff <(ls -1 dir1) <(ls -1 dir2)

This basically creates two pipes, gives them names, and passes the pipe names as command-line parameters of the diff command. I HAVE WANTED THIS FOR DECADES!!!

And just for fun, let's see what those named pipes are named:

$ echo <(ls -1 dir1) <(ls -1 dir2)
/dev/fd/63 /dev/fd/62

COOL!

(Note that echo doesn't actually read the pipes.)


VARIATION 1 - OUTPUT

The "cmda <(cmdb)" construct is for cmda getting its input from the output of cmdb. What about the other way around? I.e., what if cmda wants to write its output, not to STDOUT, but to a named file, and you want that output to become the standard input of cmdb? I'm having trouble thinking of a useful example here, but here's a not-useful one:

cp file1 >(grep xyz)

I say this isn't useful because why use the "cp" command? Why not:

cat file1 | grep xyz

Or better yet:

grep xyz file1

Most shell commands write their primary output to STDOUT. I can think of some examples that don't, like giving an output file to tcpdump, or the object code out of gcc, but I can't imagine wanting to pipe that into another command.

If you can think of a good use case, let me know.


VARIATION 2 - REDIRECTING STANDARD I/O

Here's something that I have occasionally wanted to do. Pipe a command's STDOUT to one command, and STDERR to a different command. Here's a contrived non-pipe example:

process_foo 2>err.tmp | format_foo >foo.txt
alert_operator <err.tmp
rm err.tmp

You could re-write this as:

process_foo > >(format_foo >foo.txt) 2> >(alert_operator)

Note the space between the two ">" characters - this is needed. Without the space, ">>" is treated as the append redirection.

Sorry for the contrived example. I know I've wanted this a few times in the past, but I can't remember why.


And for completeness, you can also redirect STDIN:

cat < <(echo hi)

But this is the same as:

echo hi | cat

I can't think of a good use for the "< <(cmd)" construct. Let me know if you can.


EDIT:

I'm always amused when I learn something new and pretty quickly come up with a good use for it. I had some files containing a mix of latency values and some log messages. I wanted to "paste" the different files into a single file with multiple columns to produce a .CSV. But the log messages were getting in the way.

paste -d "," <(grep "^[0-9]" file1) <(grep "^[0-9]" file2) ... >file.csv

Done! :-)