Wednesday, January 25, 2023

mkbin: make binary data

We've all been there. We want a set of bytes containing some specific binary data to feed into some program. In my case, it's often a network socket that I want to push the bytes into, but it could be a data file, an RS-232 port, etc.

I've written this program at least twice before in my career, but that was long before there was a GitHub, so who knows where that code is? Yesterday I wanted to push a specific byte sequence to a network socket, and I just didn't feel like using a hex editor to poke the bytes into a file.

So I wrote it once again:

It's a "little language" (oh boy) that lets you specify the binary data in hex or decimal, as 8, 16, 32, or 64-bit integers, big or little endian (or a mix of the two), or ASCII. 


For example, let's say I want to get the home page from yahoo interactively from a shell prompt:

./mkbin | nc yahoo.com 80
"GET / HTTP/1.1" 0x0d0a
"Host: yahoo.com" 0x0d0a0d0a

HTTP/1.1 301 Moved Permanently

(I typed the yellow, the yahoo server returned the ... blue? Cyan? I'm not good with colors.) Naturally, yahoo wants me to use https, so it is redirecting me. But this is just an example.

Here's a shell script that does the same thing with some comments added:

./mkbin <<__EOF__ | nc yahoo.com 80
"GET / HTTP/1.1" 0x0d0a       # Get request
"Host: yahoo.com" 0x0d0a0d0a  # double cr/lf ends HTTP request
__EOF__

Sometimes it's just easier to echo the commands into mkbin to create a one-liner:

echo '"GET / HTTP/1.1" 0x0d0a "Host: yahoo.com" 0x0d0a0d0a' |
  ./mkbin | nc yahoo.com 80

(Note the use of single quotes to ensure that the double quotes aren't stripped by the shell; the program needs the double quotes.)


So far, we've seen commands for inputting ASCII and arbitrary hex bytes. Here are two 16-bit integers with the value 13, first specified in decimal, then in hex:

$ echo '16d13 16xd' | ./mkbin | od -tx1
0000000 00 0d 00 0d

As you can see, it defaults to big endian.

Here are two 32-bit integers with the value 13, first in little endian, then in big endian:

$ echo '!endian=0 32d13 !endian=1 32xd' | ./mkbin | od -tx1
0000000 0d 00 00 00 00 00 00 0d
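For comparison, the same eight bytes can be produced with Python's standard struct module (not part of mkbin; just shown to confirm the endian behavior):

```python
import struct

# '<' selects little endian, '>' selects big endian;
# 'I' is an unsigned 32-bit integer.
data = struct.pack('<I', 13) + struct.pack('>I', 13)
print(data.hex())  # 0d0000000000000d
```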

You can also do 64-bit integers (64d13 64xd) and even 8-bit integers (8d13 8xd).


The construct I used earlier with 0x0d0a encodes an arbitrary series of bytes of any desired length. Note that it must have an even number of hex digits. I.e. 0xd is not valid, even though 8xd is.
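To make the grammar concrete, here is a small Python sketch of a mkbin-style interpreter. This is my own reconstruction from the examples in this post, not the actual tool; the function name mkbin_bytes is mine, and comment handling and error reporting are omitted:

```python
import re
import struct

def mkbin_bytes(spec: str) -> bytes:
    """Interpret a mkbin-style spec string and return the encoded bytes.

    Sketch of the constructs described above:
      "text"         ASCII string; backslash escapes the next character
      0xHH...        raw bytes (must be an even number of hex digits)
      16d13, 16xd    sized integers, decimal (d) or hex (x)
      !endian=0/1    little (0) or big (1) endian; default is big
    """
    out = bytearray()
    big_endian = True  # the default, as noted above
    # A token is either a double-quoted string (backslash escapes the
    # next character) or a run of non-whitespace.
    for tok in re.findall(r'"(?:\\.|[^"\\])*"|\S+', spec):
        if tok.startswith('"'):
            # ASCII string; backslash just passes the next char through.
            out += re.sub(r'\\(.)', r'\1', tok[1:-1]).encode('ascii')
        elif tok.startswith('!endian='):
            big_endian = tok.endswith('1')
        elif tok.startswith('0x'):
            out += bytes.fromhex(tok[2:])  # rejects odd digit counts
        else:
            # Sized integer, e.g. 16d13 (decimal) or 16xd (hex).
            m = re.fullmatch(r'(8|16|32|64)([dx])([0-9A-Fa-f]+)', tok)
            if not m:
                raise ValueError(f'bad token: {tok!r}')
            bits, base, digits = m.groups()
            val = int(digits, 16 if base == 'x' else 10)
            fmt = ('>' if big_endian else '<')
            fmt += {'8': 'B', '16': 'H', '32': 'I', '64': 'Q'}[bits]
            out += struct.pack(fmt, val)
    return bytes(out)
```

Feeding it '16d13 16xd' reproduces the 00 0d 00 0d output shown earlier.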


Finally, be aware that the string construct does not have fancy C-like escapes, like "\x0d". The backslash only escapes the next character for inclusion and is only useful for including a double quote or a backslash into the string. For example:

$ echo '"I say, \"Hi\\hello.\"" 0x0a' | ./mkbin | od -tx1
0000000 49 20 73 61 79 2c 20 22 48 69 5c 68 65 6c 6c 6f
0000020 2e 22 0a
$ echo '"I say, \"Hi\\hello.\"" 0x0a' | ./mkbin
I say, "Hi\hello."

Thursday, January 19, 2023

Nick Cave has Nothing to Fear

Nick Cave doesn't like ChatGPT.

Somebody asked ChatGPT to compose a song in the style of Nick Cave. Nick didn't like it, calling it "replication as travesty" among other things.

I think Nick and other successful singer-songwriters have nothing to fear.

First of all, replication is nothing new. Beginner musicians imitate the styles of their favorite artists all the time. The good ones eventually find their own voices. But what about the wannabes that just get REALLY good at emulating their hero's style? Think "tribute band". Nick doesn't fear them. Nick Cave fans will buy Nick's music, even if a tribute band sounds just like him. Having that tribute band use an AI doesn't change that.

It might be a little dicier if somebody uses an AI to compose a song/painting/whatever in the style of a long-dead artist and claims that it is a newly-found genuine creation of the original artist. This is also nothing new. It's called forgery, and people have been dealing with that for as long as there has been an art market. I can't see reducing the cost of entry into the forgery profession will lead to a lot more fraud being perpetrated. If anything, it will make consumers even more suspicious of unlikely "discoveries", which is probably a good thing.

Nick's primary complaint seems to be that good music that touches a human's heart can only come from another human heart (usually a tortured one). Bad news, Nick. There's plenty of successful music out there that does not come from the creator's heart, and has no intention of touching the listener's heart. In my youth, they called it "bubble gum music". Cheery, maybe danceable, maybe a catchy riff that you find yourself humming. Think Monkees or TV commercials. I suspect Nick wouldn't care much one way or the other if that music started coming from AIs instead of good-but-not-great-musicians-who-need-to-eat.

Is serious music in danger of being AI generated?

Well ... maybe? There are plenty of successful singers who are not songwriters. They mostly get their songs from non-performing songwriters. I'm sure that some of those songwriters are tortured artists whose blood and sweat come out in their songs. A lot of others are fairly non-creative mediocre songwriters who figured out a formula and got good at imitation. Give an uninspired song to a really successful singer, and you can have a hit. Is this something that bothers serious songwriters? Probably. There are way more songwriters, both serious and formulaic, than there are successful singers. Maybe the uninspired songwriters have something to fear with AI replacing them. But is anybody that worried about them? I suspect not.

But what about serious non-performing songwriters who really do pour their blood, sweat, and tears into their work? Will AIs replace them?

Maybe. But they have a hard enough time already getting their songs on the air. I have a hard time believing it will make much of a difference. If .00001% of the population lose their jobs doing what they love, I guess that's kind of sad, but I wouldn't call it a tragedy. The number of artisans creating elegant and artistic horse saddles is a small fraction of what it was 150 years ago. Times change.

Wednesday, January 18, 2023

Cheating with AI?

I saw an article about a teacher who got an essay from a student that was well-written. Maybe too-well written. Turns out the student used an AI to write it, and turned it in as their own work. The teacher (and the article) predicted massive changes to how school work is assigned, performed, and evaluated.

I'm not sure I understand why.

Cheat Your Way Through School?

Cheating has always been with us. When I was a student, that consisted of copying (verbatim or paraphrasing) from magazines, encyclopedias, or the smart kid in a different class. And while many kids got caught, many others did not. Teachers used to tell us that cheating didn't actually help us prepare for our futures, but kids are too now-focused to understand or care about that. We just knew that our parents would take away our TV privileges if we got a bad report card, so some kids cheated.

The Internet supposedly changed all that since it became trivially easy to cheat. As though lowering the effort would open the floodgates. But it didn't. Sure, you can buy essays on-line now, which makes it easier to cheat, but most kids still don't.

And now AI is about to change all that since it is even more trivially easy (and cheaper) to cheat.

I don't buy it. Cheaters are going to cheat, and it's not obvious to me that making it easier and cheaper to cheat will make a lot more kids into cheaters. 

Cheat Your Way Through Career?

And besides, why do we care? If cheaters make it all the way through college with much higher grades than are deserved, they will more-or-less reach their true level when they start their careers. I've had to fire some programmers who made me wonder whether they had ever written a line of code in their lives. Did they cheat their way through school? Or did the schools just do a bad job of preparing programmers? I don't know, and I don't care. I managed to hire some excellent programmers in spite of getting a few duds. And I suspect the same basic pattern exists in most careers.

I'll focus my discussion on the career of computer programming, but I suspect many of the concepts will apply to other careers.

Maybe the AIs are getting so good that a poor programmer that is good at cheating will produce just as good results as the excellent programmer down the hall. How is that fair? And does it even matter?

My programmers take poorly-described requirements and figure out what the user needs, and then figure out how to incorporate those needs into our existing product. Cheaters can't do that even if they have a great AI at their disposal.

In fact, even that is not what my senior programmers do. They figure out what our users want before the users do. When 29West was just getting started (2003-ish), I don't think there was such a thing as a brokerless general pub-sub messaging system. The financial services industry wanted low latency, but also wanted the flexibility of pub-sub. The idea 29West came up with was to combine peer-to-peer with reliable multicast and the pub-sub model. Figuring out how to do that required dreaming up new ways of doing things. Even if a really good AI existed back then, it would not have been trained on it.

I guess what I'm saying is that the most advanced AI technology available today is still based on the concept of training the AI with a lot of examples. It will be able to report the state of the art, but I can't see it advancing the state of the art. 

When Does Cheating Stop Being Cheating?

There was a period of time when I was in school when we couldn't use a calculator during a math test. You had to do the arithmetic by hand (and show your work). I suspect that still exists for a month or two when kids first learn what arithmetic is, but I suspect that calculators are now standard issue for even very young students. Is that bad?

I remember hearing people complain. "What if your batteries die? How will the supermarket employee add up your total?" Today, if a store's cash register goes down, commerce stops. And it's not because the employees can't do sums in their heads.

I also remember when poor spelling and grammar were impediments to career advancement. I guess they still are -- if you send me an email with lots of misspellings, I will think a little less of you. With a spelling checker built right into the email client, what's your excuse for not using it? (My mother-in-law used to disapprove of modern schooling where Latin is no longer a required subject. Her point was that learning Latin made you better at spelling. My point is, why bother?)

Remember cursive writing? Does anybody under 30 still use it? Do we still need to be good at shoeing horses? Starting fires with two sticks?

Do we really need everybody to be good at writing essays? Maybe it's time to consign that to the computer as well.

And yes, I know that writing essays is supposed to be a tool for exercising research skills and critical thinking. But is it really? Isn't the essay more of a measurement tool? I.e. if you did a good job of researching and thinking critically, then supposedly that will be reflected in the quality of your essay. But does that really work?

I don't know. And I've definitely strayed out of my area of expertise; I'll stop mansplaining now.


I cut and pasted this post into ChatGPT and asked it to rewrite it better. It certainly shortened it, and included most of my main points. But it also missed a few points I consider important. And it made it a lot more boring, IMO. Then again, I always have liked to hear myself speak, so I'm biased.

OpenAI GPT-3: Initial Impressions

So, ChatGPT is at capacity right now, and I'm sure not going to sit here and hit refresh every few seconds to try to get in. So I went to the GPT-3 sandbox and tried a few of the examples.

My impressions: disappointing given all the hype, but if I hadn't seen the hype, I would be reasonably impressed.

[UPDATE: I've since gotten into the real ChatGPT, and I must say it is more impressive than the sandbox I played with in this post. The sandbox is based on GPT-3, while ChatGPT is GPT-3.5.]

In the following examples, I am including output from OpenAI's GPT-3 model. In accordance with OpenAI's wishes, my name is Steve Ford, and the text shown in green is generated by an AI.

Analogy maker

My entry is first, followed by the AI response in green.

Get the logic right, then optimize.

It's like building a house: first make sure the foundation is solid, then make it look nice.

Not bad.

Do unto others as you would have them do unto you.

It's like a golden rule of giving and receiving; treat others as you would like to be treated.

Um, I don't think you can use a definition as an analogy. The point of an analogy is to improve understanding of a concept by re-structuring it in different terms that the reader might be more familiar with.

Look out for number one.

It's like putting on your own oxygen mask before helping others on an airplane.

Impressively good and bad. Yes, it kind of understood what the phrase means, but it also completely misses the essence of the phrase. "Look out for number one" is all about being selfish (Collins dictionary says, "If you are looking out for number one, you are thinking of yourself rather than considering other people"), whereas the analogy is about doing something that might seem selfish at first glance, but is actually in support of generosity and the greater good.

Pro tip: don't use GPT-3 to learn the meaning of idioms.

Do unto others ... and then split

Give a hand to those in need ... and then walk away.

This one might not be fair; I wonder how many under-50-year-olds would get that reference. Anyway, the analogy is the exact opposite of the original.

Pro tip: don't use GPT-3 to learn about 50-year-old memes.

There are other example applications that I would like to play with, but who has time for that? Over time, I'll probably have more to say about AI.

Invert the Meaning

I took the analogy maker settings and changed the command (or whatever that first line is called) to "Invert the meaning:".

Time flies like an arrow.

Time stands still like a rock.

Not idiomatic, but impressive.

Smoking is bad for you.

Not smoking is bad for you.

Yep, it will lie if you tell it to.

Write an Essay

I took the analogy maker settings and changed the command to "Write an essay:"

Why structured programming is bad.

Structured programming is a programming paradigm that relies on breaking down a program into smaller, manageable parts. It is a popular approach to programming because it is easy to understand and can be used to create complex programs. However, there are a number of drawbacks to structured programming that can make it a less than ideal choice for certain projects.

The first issue with structured programming is that it can be difficult to maintain. Because the code is broken down into smaller parts, it can be difficult to keep track of how the parts fit together. This can lead to bugs and errors that can be difficult to debug. Additionally, if changes need to be made to the program, it can be difficult to make sure that all the parts are updated correctly.

Another issue with structured programming is that it can be inflexible. Structured programming relies on a set of predetermined rules and structures, which can limit the options for solving a problem. If a programmer needs to come up with a creative solution, structured programming may not be the best choice.

Finally, structured programming can be slow. The small parts of a program can take a long time to process, especially if there are a lot of them. This can lead to programs that take a long time to run, which can be a problem for projects that require fast results.

In conclusion, structured programming can be a useful approach for certain projects, but it is not without its drawbacks. It can be difficult to maintain, inflexible, and slow. For projects that require creative solutions or fast results, it may be better to use an alternative approach.