Super Crunchers

I am slightly disappointed. Two problems with the book (three if you count the translation, which kept forcing me to re-translate into English and back into Greek):

  1. The book feels like an extended version of an already long paper. This becomes tiring at times.
  2. The book explains why traditional experts in certain (all?) fields will be replaced by statistical methods. What the book does not say is that a new breed of experts will rise: those who will devise the statistical models. The untrained eye may easily believe that all it takes is a lot of data and a few keystrokes.

I wrote the above lines while reading the book and before the final chapter. Chapter 8 makes up for the above, but somehow I expected more. Maybe I do not belong to the target audience.

PS1: While reading the book, I think I discovered how the character of Ian Frost (from Turing) was named.

PS2: Going through the author’s page I came across stickK which seems to be a perfect tool to fight procrastination.

Open Systems Security – an Architectural Framework

“In the old days”, when security information was scarce, many of us began shaping our security mentality (be it white, gray or black) by reading “Improving the Security of Your Site by Breaking Into It” and the Computer Security FAQ, and by running tools like iss and Crack. I think it was there that I first read about Arto Karila’s PhD thesis. Even though it is an OSI-based document, it helped me understand basic concepts. However, there were two problems with the document:

  1. It was hard to find, and
  2. It was in a weird PostScript format that even modern versions of ghostscript refuse to display.

With the help of a friend I managed to transform it to PDF and upload it to Scribd: Open Systems Security – an Architectural Framework

Of historical value mostly.

Update 2013/04/13: Now available at https://github.com/a-yiorgos/karila

Device drivers in Java?

The following tweet by @DSpinellis:

#USENIX @AnnualTech M. Renzelmann Decaf: Moving Device Drivers to a Modern Language (Java). He says performance impact is 1%

refers to “Decaf: Moving Device Drivers to a Modern Language”, which describes a system where large parts of a driver can be written in a better language than C, the example here being Java.

I was certain that this was not the first time I had read about such an idea. This weekend I was able to go through my archive and track down the reference. Back in January 1997, in the NT Insider (Volume 4, Issue 1), Peter Viscarola, while criticizing the multitude of startups founded by anyone who could code a Java applet (this was the pre-dot-com-boom era, remember), wrote:

It’s obvious that we are missing a real opportunity here to capitalize on the convergence of these trends. We need to immediately fund a start-up company to develop a package for writing Windows NT drivers in Java. THINK of it! We could have processor architecture independent device drivers that don’t even need to be recompiled in order to support X86, PPC, and Alpha machines! Amazing! We could create a visual driver development environment, complete with cute animated assistants. And, the drivers could probably have a visual component to them, so you could actually see your toaster-oven driver doing its work. Cool! THEN we could all be challenged, and have fun, and get rich at the same time. Wow! Why didn’t I think of this before?

It would be nice if we could see Peter’s views on the subject 12 years later.

The story of a lost manuscript

In “The development of social network analysis” (about which I have also blogged), Linton C. Freeman, among other things, tracks the efforts of different scientists to lay a mathematical foundation for SNA. For two such efforts he writes:

“both Fararo (circa 1964) and I separately set out to specify the common mathematical properties of all these seemingly different studies. Fararo circulated but never published his paper. Mine was presented several times and eventually published, but not until twenty-five years later.”

The unpublished manuscript in question was entitled “Theory of Webs and Social Systems Data”. I contacted Professor Fararo for the unpublished manuscript. He told me that he had lost his copy and that I might be lucky asking Professor Freeman, which I did. When I contacted Professor Freeman he was away from home, but he promised to look for it. Indeed, about a week later he found the manuscript, had it scanned, and emailed it to me. As I told my wife, who is an archaeologist, I think this is what it feels like when they (archaeologists) make a discovery.

Prior to writing this blog post, I told this story to two friends of mine. Funnily enough they asked me the same question:

– Name one Greek Professor who (a) would answer your email and (b) would go to all that effort to locate something written circa 1965-1966 and send it to you.

This humble blog post stands to publicly thank both Professors for their kind replies and help.

Update: After getting permission, I uploaded the document on Scribd.

re: The Humbling Power of P v NP

Some engineer out there has solved P=NP and it's locked up in an electric eggbeater calibration routine. For every 0x5f375a86 we learn about, there are thousands we never see.

In “The Humbling Power of P v NP”, Lance Fortnow urges theorists to try to solve P v NP “not because you will succeed but because you will fail”. This is the Kobayashi Maru character test for theorists, it seems.

So what about non-theorists?

My answer is: so what if a problem is NP-complete? Does this mean that we are going to use that fact as an excuse not to solve it, or to present a lousy hack as a solution? Or do people think that such problems never come the way of “a real professional”? They do, but theorists are trained to recognize them when they see them.

Just like theorists, then, “practical computer people” must try to solve (using whatever tool they see fit) an NP-complete problem (like the TSP, for example). Not because they will solve it optimally, but because there will always be a better solution. And by seeking it, and by understanding that “computation is a nasty beast”, they will become better professionals.
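To make that concrete, here is a minimal sketch of the two classic heuristics one usually tries first on the TSP: greedy nearest-neighbor construction followed by 2-opt improvement. The distance matrix is invented for illustration, and neither heuristic is guaranteed to find the optimal tour; that gap is exactly the point.

```python
from itertools import combinations

# A small symmetric distance matrix for 5 hypothetical cities
# (the distances are made up for illustration).
DIST = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def tour_length(tour):
    """Total length of a closed tour (returns to the starting city)."""
    return sum(DIST[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def nearest_neighbor(start=0):
    """Greedy construction: always visit the closest unvisited city."""
    unvisited = set(range(len(DIST))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: DIST[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour):
    """Improvement: reverse a segment whenever that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(tour)), 2):
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(candidate) < tour_length(tour):
                tour, improved = candidate, True
    return tour

tour = two_opt(nearest_neighbor())
print(tour, tour_length(tour))
```

The 2-opt pass can only ever match or improve on the greedy tour; whether either comes close to the optimum is exactly the kind of question the post says practitioners should learn to ask.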

Update: You should also read:

The development of social network analysis

As promised, I finished reading “The development of social network analysis”. The book, written by Linton C. Freeman, follows the development of the field from pre-Moreno times and the introduction of structural thought into social studies up to the late 90s. According to the book cover, it is based on the Keynote Lecture that Freeman gave in April 2000 at the twentieth annual meeting of the INSNA.

The study of social structure has come of age

This is the last sentence of the book. Before reaching it, Freeman takes us on a journey that roughly begins with the works of Auguste Comte, who apparently planted the first seeds of structural thought. Since then, the field of structural thought has been restarted a number of times, and for a variety of reasons, among them megalomania, shifts of interest, interdepartmental politics, job security, and national politics (like the Jenner committee, which essentially ended a whole research group).

A whole chapter is devoted to the life of Jacob Levy Moreno, whom many think of as the father of the field, although it is later shown that there were earlier studies with similar aims and results, and that the systematic approach to and development of his ideas is most probably owed to the work of Helen Jennings and Paul Lazarsfeld.

All the pioneers and heroes of SNA parade through the book; the flow of names and their interrelations is so vast that halfway through the book I regretted not taking notes of the names and their relations in order to produce something like the TCS genealogy, coupled with some visualization. Luckily, on page 131 such a pruned graph is presented by the author.

Professor Freeman characterizes social network analysis as an approach that involves four defining properties:

  1. It involves the intuition that links among social actors are important.
  2. It is based on the collection and analysis of data that record social relations that link actors.
  3. It draws heavily on graphic imagery to reveal and display the patterning of those links.
  4. It develops mathematical and computational models to describe and explain those patterns.
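Property 4 is the one that lends itself to code. As a toy illustration (the actors and ties below are invented, not taken from the book), a sociogram can be represented as an adjacency structure and a simple structural measure, such as degree centrality, computed from it:

```python
# A toy sociogram: pairs of actors linked by a social tie (invented data).
ties = [
    ("Alice", "Bob"), ("Alice", "Carol"),
    ("Bob", "Carol"), ("Carol", "Dave"),
]

# Build an undirected adjacency list from the ties.
adjacency = {}
for a, b in ties:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)

# Degree centrality: the number of direct ties each actor has.
degree = {actor: len(friends) for actor, friends in adjacency.items()}
print(degree)
```

Real SNA work uses far richer models than a degree count, but the pipeline, relational data in, graph structure, then a mathematical measure out, is exactly the combination of properties 2 through 4.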

All the efforts of structural thought (almost all of them lacking the combination of all four characteristics) are presented, most of them in the USA and a few in Europe, up until the great restart of the discipline by Harrison White and his team at Harvard. The central role that Barry Wellman played in unifying all the approaches to structural thought, by organizing meetings with key persons and by forming the INSNA and the Connections newsletter, is covered. The EIES system (of interest to those who seek fragments of Internet history) is also covered to some extent, showing the role that technology can play in forming both a discipline and (human) networks.


Asking Questions…

We have seen that nobody should be afraid to ask a question. One of the first lessons I got from the USENET is that “Silly questions are the ones never asked”.

Today’s snail mail included the latest issue of NT Insider where Peter Viscarola in his column (Peter Pontificates) deals with the whole “asking a question” issue again:

Being a noob excuses stupidity. In fact, being a noob totally means asking stupid questions. However, being a noob does not excuse lack of engineering discipline. As an engineer, I simply cannot understand how a fellow engineer can ask a question without at least attempting to put their question in its proper context.

How rightly said.