Pete's Log: Clearing out the queue
Entry #1532 (Random Crap), posted when I was 29 years old.
As mentioned earlier, I recently fixed the postpone (and consequently the resume) features of my blog software. Right now, there are 13 postponed entries. Two of these are test entries, and one is a recent creation that I'm still working on. The rest are an interesting look back at entries that almost were. I think it's time to let them see the light of day.
Or at least some of them. Here are the six I've deemed worthy of sharing.
One problem is that the postpone feature does not tag the postponed entries with a date. So I can offer only vague estimates of when I wrote these. The general range is between April 2001, when I first implemented the postpone feature, and early 2005, when it broke.
Without further ado, here they are:
algebra
This must come from the fall of 2001, when I briefly enrolled in MATH 601. No idea where I was going with this.
Let G, H be finite cyclic groups. Then G x H is cyclic if and only if (|G|, |H|) = 1.
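For what it's worth, a quick sanity check of the statement: Z_2 x Z_3 is cyclic, since (1, 1) has order 6 and thus generates the whole group, while Z_2 x Z_2 is not cyclic, since every non-identity element has order 2. That lines up with (2, 3) = 1 and (2, 2) = 2.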
Summer 2001 Mix
Obviously written sometime after the summer of 2001.
The latest mix CD of my creation, in celebration/memory of Summer 2001:
- Gorillaz -- Clint Eastwood
- Sum 41 -- Fat Lip
- Devo -- ???
- NOFX -- Champs Elysees
- Sugar Ray -- Someday
- Sugar Ray -- something else
- Sugar Ray -- When It's Over
- Less Than Jake -- Responsibility
- Dance Hall Crashers -- Cricket
- Weezer -- Island in the Sun
- Blink 182 -- Rock Show
- Dixie Chicks -- something
- Minor Threat -- something
- NOFX -- Out of Angst
Some of these songs were released in 2001, others are older. Of the older ones, some of them are on the mix because I first discovered them during s2k+1, others because I feel like it, and it's my CD, dagumit!
Writing s2k+1 feels cool, despite the fact that it is equal in length to s2001, both in number of characters and in number of syllables. On the other hand, s2k+1 is shorter than 7fe824db38c84510777d762dd40f24e0, the md5sum of s2001.
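In case anyone wants to double-check that digest, Python's hashlib will spit it out. Whether it reproduces the value above depends on exactly what got hashed (the bare string versus one with a trailing newline, for instance):

import hashlib

# hex md5 digest of the bare string "s2001", no trailing newline
print(hashlib.md5(b"s2001").hexdigest())

From a shell, echo -n s2001 | md5sum does the same thing.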
ff
I have no clue when this is from.
Pete's phrase du jour: "Word, am I fresh?"
oh, and I mustn't forget: "what's 12 inches long and white?" -- "nothing!"
This one was untitled. Time frame is sometime between April 2003, when I started at netViz, and April 2005, when Meg and I moved to Rockville, which reduced my commute to about 15 minutes. My guess would be sometime in 2003, though.
Inspired by a book Arun lent me called "The Origin of Consciousness in the Breakdown of the Bicameral Mind," I once went through a phase of out-of-body experimentation. It must have been during the Spring of 2002, since I think it was shortly before I met Meg. Basically, on occasions when I was unable to sleep, I would meditate on my breathing and then try to "move" where in my body my consciousness was located. After some experimentation I even managed to move it outside my body by a few inches. It was a very odd sensation and an interesting thing to experiment with.
I bought and read "Surely You're Joking, Mr. Feynman!" this week and was amused to find that he described a similar process in his chapter on altered states of mind. So yeah. That's that.
I'm becoming somewhat acclimated to suburban living. I've heard the average commute around here is about 45 minutes. So I should be pretty happy that mine takes only 25-30 minutes. The nice thing is that I live closer to DC than where I work, so I drive against traffic. If I tried to drive to work in the evening instead of in the morning, it would take 45 minutes to an hour.
Not only do I need to drive at least 25 minutes to get to work, I also need to drive quite a ways to hang out with most of the people I hang out with. Jason's house is only about 10 minutes from where I work, but 25 minutes from home. The game store where Jason and Brandon and I sometimes hang out is 5 minutes from work, but 30 minutes from home. The friends from work that I've hung out with also live at least 20 minutes away from my house.
So I spend a lot of time driving.
I wonder
No idea, guessing late 2001 or early 2002. The funny thing is that a few weeks ago I was searching my log for the name Dijkstra, convinced I had already made this entry. Now I know why I couldn't find it.
Is it a fair statement to say that all current computational devices are simply optimized state machines?
I think computer science is an ideal field for me, because there has already been a famous computer scientist with a Dutch last name containing "ij", which makes it more likely that people will know how to pronounce mine. Dijkstra continues to inspire me.
Linda
My guess is late 2001, during my I like reading!!! phase, a.k.a. the time I was taking Ritalin. I saved this one for last, since it's the most technical (and thus the least interesting for many).
Linda, I think, is a cool concept. I heard it mentioned a few times, and Dr Freeh briefly explained it to me once, but recently I happened upon a Linda article and decided to read it, and since then I've found myself desiring more knowledge on the subject. I don't know where this new knowledge is going to take me, but I'm enjoying acquiring it, and it's bound to be useful at some level.
Linda and Friends
by Sudhir Ahuja, Nicholas Carriero, and David Gelernter, Computer Magazine, August 1986
This paper is an introduction to Linda. Linda is a set of primitives (in, out, read, eval) that can be added to a language to let it support the Linda programming model. The Linda model is built around the Tuple Space, an abstract, fully associative storage area for tuples. A tuple is simply an ordered collection of values, ("Foo", 5) for example.
The Linda operators work as follows: out adds a tuple to the tuple space (TS), so out("Foo", 5) would add the ("Foo", 5) tuple to the TS. in removes a tuple from the TS. in("Foo", int i) would match any tuple in the TS that consists of the string "Foo" followed by an integer value, and i is assigned the integer value from the tuple that gets removed. If more than one tuple matches, one is chosen nondeterministically. If none match, the in operation blocks until a matching tuple is added to the TS. The read operation is like the in operation, except that the tuple is not removed from the TS. The eval operation is not explained in detail in this paper, but it basically adds an active tuple to the TS, in some ways similar to a future, but not quite the same.
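To make the model more concrete, here's a rough sketch of a tuple space in Python. This is just my own toy rendering of the idea, not any real Linda implementation: the TupleSpace class and its method names are made up, a Python type (like int) stands in for a formal parameter in a pattern, in is spelled in_ because in is a reserved word in Python, and eval is left out entirely.

import threading

class TupleSpace:
    """A minimal, thread-safe sketch of a Linda-style tuple space (eval omitted)."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *values):
        """Add a tuple to the tuple space."""
        with self._cond:
            self._tuples.append(tuple(values))
            self._cond.notify_all()

    def _match(self, pattern, tup):
        # A pattern element is either a concrete value ("Foo") or a type (int),
        # which acts as a formal parameter and matches any value of that type.
        if len(pattern) != len(tup):
            return False
        for p, v in zip(pattern, tup):
            if isinstance(p, type):
                if not isinstance(v, p):
                    return False
            elif p != v:
                return False
        return True

    def in_(self, *pattern):
        """Remove and return a matching tuple, blocking until one exists."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

    def read(self, *pattern):
        """Like in_, but the matching tuple stays in the tuple space."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        return tup
                self._cond.wait()

ts = TupleSpace()
ts.out("Foo", 5)
print(ts.read("Foo", int))  # ('Foo', 5) -- tuple is still in the space
print(ts.in_("Foo", int))   # ('Foo', 5) -- tuple is removed

Note that out and in_ never name each other: whoever put the tuple in doesn't know or care who takes it out, or when.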
This paper also describes some existing implementations of Linda, as well as a proposal for specialized hardware to support Linda. Some performance numbers are also hinted at, suggesting that Linda generally scales well.
Linda in Context
by Nicholas Carriero and David Gelernter, CACM April 1989
This article compares the Linda model for parallel computing to other, more mainstream models: concurrent object-oriented programming, concurrent logic programming, and functional programming.
Actually, Linda is also compared to message passing, but the authors quickly dismiss message passing as not being a good paradigm for writing parallel programs. With message passing, the programmer generally needs to associate both a sender and a receiver with every message, which limits flexibility. Linda provides an uncoupled means of communication, i.e., the sender of a message need not concern itself with who will receive it or when, and vice versa.
One point the paper made about parallelizing compilers was interesting. The authors raised two issues. First, "compilers can't find parallelism that isn't there," that is, sometimes code needs to be rewritten using new algorithms in order to be parallelizable. Second, writing parallel code isn't necessarily as hard as some people believe it to be.
Regarding concurrent object-oriented programming, the authors argued two points: first, the synchronization mechanisms in concurrent OO are not as clean as those in Linda (the only mechanism they really mention is monitors), and second, concurrent OO is little more than message passing with OO wrapped around it.
As for logic languages, the authors provide code examples demonstrating that the Linda code describes the actions occurring much more clearly. Also, the logic language Linda is compared against requires more static knowledge of the system at the time the code is written than Linda does. I like that the examples the authors chose to demonstrate Linda's greater flexibility are the same ones the authors of Parlog86 (a parallel logic language) used to demonstrate the benefits of their language. Linda definitely won in this case, I think.
Finally, functional languages are compared to Linda. The authors did not compare any explicitly parallel functional languages to Linda, but instead compared Linda against the results of running parallelizing compilers on functional languages. Functional languages are easy to parallelize, so this should be the area with the biggest win for parallelizing compilers. They showed a case in which the two approaches compared favorably in terms of how easy the code was to understand. Some of the arguments made here were kinda shady, but made enough sense. The authors then went on to blow functional languages out of the water by demonstrating that there are many cases in which parallelism is not best described at a functional level, and presented a pretty good argument for Linda in those cases.
Overall, a fun paper because it discusses many interesting issues in language design for parallel computing.
Lime....