Update, 3 January 2007: Apple knew about this and had it filed under Bug ID# 3725615.
Summary:
Setting the "Image Size" for a Mail attachment in a Mail compose window can modify the file names of non-image attachments.
Steps to Reproduce:
1. Start Mail
2. Open a New Message window
3. Attach a PDF file
4. Attach a JPEG image
5. Use the "Image Size" selector in the lower right of the compose window to change the setting to "small" or "medium".
6. Observe that the PDF attachment's filename extension has changed to '.jpg' instead of the correct '.pdf'.
Expected Results:
Changing the image appearance in a mail message should only impact image attachments. Further, no filenames should be changed when changing the image size since that action should only change the message view, not the underlying content.
Actual Results:
The filename of the PDF attachment was changed, although the MIME type was not. The changed filename propagated to the sent message.
Regression:
Tested on Mail version 2.1 (752/752.2) and 2.1.1 (752.3). The problem exists in both versions.
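To verify the symptom on a saved message, it is enough to compare each attachment's declared MIME type against the type implied by its filename. A minimal Python sketch, with the file name hypothetical:

```python
# Hedged sketch: scan a message saved from Mail (file name hypothetical)
# and flag attachments whose filename extension disagrees with their
# declared MIME type, which is exactly the symptom described above.
import email
import mimetypes

with open("sent-message.eml", "rb") as f:
    msg = email.message_from_binary_file(f)

for part in msg.walk():
    name = part.get_filename()
    if not name:
        continue                                  # not an attachment
    declared = part.get_content_type()            # e.g. application/pdf
    guessed = mimetypes.guess_type(name)[0]       # inferred from the name
    if guessed and guessed != declared:
        print(f"mismatch: {name!r} looks like {guessed}, declared {declared}")
```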
Friday, December 29, 2006
Wednesday, December 27, 2006
Wall Building
This should sound immediately familiar to anyone living around Washington, DC, this year:
"Court intriques intensified, leading to a near total paralysis of policy. Frontier policy became a political football passed between court factions, each mouthing angry patriotic platitudes, each vying to win favour with unpredictable emperors who sat, cloistered in the Forbidden City, angrily nuturing a futile sense of superiority over the foriegners snapping at the frontier.
Against this backdrop of racist arrogance, military incompetence and murderous factionalism, only one course of action was beginning to look both defensively viable and psychologically satisfying: wall-building."1
The year was 1457 and the place was Ming China. These days we build our own walls. Sadly, they are likely to do us as much good as they did the Ming. That is, of course, none at all.
--
1. Julia Lovell, The Great Wall: China Against the World 1000 B.C. - A.D. 2000, Grove Press, New York, 2006, p. 201
"Court intriques intensified, leading to a near total paralysis of policy. Frontier policy became a political football passed between court factions, each mouthing angry patriotic platitudes, each vying to win favour with unpredictable emperors who sat, cloistered in the Forbidden City, angrily nuturing a futile sense of superiority over the foriegners snapping at the frontier.
Against this backdrop of racist arrogance, military incompetence and murderous factionalism, only one course of action was beginning to look both defensively viable and psychologically satisfying: wall-building."1
The year was 1457 and the place was Ming China. These days we build our own walls. Sadly, they are likely to do us as much good as they did the Ming. That is, of course, none at all.
--
1. Juia Lovell, The Great Wall; China Against the World 1000 B.C. - A.D. 2000, Grove Press, New York, 2006, pp. 201
Tuesday, December 26, 2006
The Tiniest Terrorist
On Tuesday, 19 December, I found myself traveling again, this time to Florida to visit with my parents and in-laws. We flew from Washington Dulles. Pre-holiday traffic was light. One quarter of the seats on the plane were unoccupied - the first time I have seen that in years.
B and I were traveling with our six-year-old daughter. Our son was staying with a friend for the short trip to avoid missing his school Solstice festival. We grabbed some lunch at a terminal burrito stand (err, a burrito stand in the terminal - one must be careful with ordering adjectives) while waiting to board. An unexpected movement caught our eyes. An electric cart belonging to the airport police zipped across the terminal at high speed with a three-year-old boy at the helm and a frantic mother giving chase.
The cart smashed into a row of chairs and took out a brochure stand at about fifteen miles per hour. People scattered left, right and over. The row of chairs - connected as they are in groups of four - broke, bent and slid into the next row, trapping two elderly ladies in their seats.
The recalcitrant cop finally arrived on the scene. He gawked and proceeded to do, umm, nothing. A crowd gathered instantly. Not one of them attempted to help the trapped women.
Now, I get on a soap box about this. Nothing, and I mean nothing, provides me with such burning, passionate feelings of disdain as a gawking, ineffectual crowd in times of crisis. I stand appalled at the ineptitude of my fellow humans, rubberneckers all. You would think that in the post-9/11 United States, in a major international airport, with multiple military and TSA personnel passing by and a policeman present on the scene, someone in charge would be able to right such a minor situation immediately. Sadly, it is not so.
I rendered first aid and directed the moribund cop to call for paramedics. My wife took the mother and child in hand and calmed them down. The lady in the line of transit had a patella the size of a grapefruit in the thirty seconds it had taken me to reach her. Not good. Her leg was largely purple and almost assuredly broken. The other woman had a minor abrasion. I turned to a uniformed airport service worker and directed him to go to the restaurant and get some ice. "Ice?" He looked at me. "Who the hell are you?" The idiot! I gave him The Look. The Navy gave me The Look and I passed it on in turn. "I'm a medic. Get the damn ice." He did. Lucky for him.
A doctor showed up on the scene about the time I had palpated and elevated their legs and cleared most of the debris. That cart hit hard. It sheared two metal uprights holding the back of a chair clean off. Each was about 40 by 5 millimeters of steel in cross section. The chair seats were steel and of similar thickness. They were just plates, covered with foam padding. One of those plates had rammed edge on straight into the tibia of the lady, just below the knee. Ouch!
We got the ice on both women. When I requested that the airport worker look for some towels, either paper or cloth, to put under the ice, he replied, "Oh, now you are really pushing it." I was too busy to kill him, so I found some myself. The doc was young and probably an intern. He had never treated anyone outside of a hospital. He hesitated when the women wanted to board their aircraft and left it to me to recommend that they shouldn't and to explain why. Then he agreed. The injured one was clearly in shock and not yet thinking straight. She was distraught to think that she would miss Christmas with her family. We got both of them some water.
The paramedics finally arrived. I gave them the brief and turned it over to them. Then I did the only thing left to do - I faded into the crowd and went to my own gate with my family. To hang around was to be part of the lawsuit for the next six months of my life. The airport staff, from the cop who left the cart on and unattended to the bozo who topped out his skills getting ice, were clearly negligent. I didn't have anything to say to their lawyers that would help them any.
An airport staffer went looking for me later. I had my coat on, my glasses off and my phone at my ear. My wife waved him away and refused to speak English. He went away and we boarded our flight. I have no time for organizations that only get efficient when it becomes clear that they are guilty and about to be sued. I hope that old lady's retirement account gets a nice boost. She deserves it. Throughout the entire ordeal, she was polite, orderly and competent, in spite of being in a great deal of pain.
Can you imagine the chaos that would ensue if a terrorist hit such a facility? These people couldn't handle an errant three-year-old!
Monday, December 11, 2006
The Little Penguins of Phillip Island
I had the privilege of seeing one of nature's wonders first hand yesterday. So many people travel by air on Fridays and Saturdays that I was able to save US$600 by flying on a Sunday, thus giving me an unexpected Saturday in Melbourne. Tourist brochures offered a variety of trips into the Victorian countryside and I chose the nightly "Penguin Parade" on Phillip Island. Andrae decided to accompany me. I was excited at the prospect of seeing the world's smallest penguin species in the wild. The little penguins (Eudyptula minor) are called just that. They were once called fairy penguins in Australia, but use of that name is passing.
Phillip Island is two hours by bus from downtown Melbourne, east and south through former horse country turned into new suburbs. The bus picked us up at my hotel at 4:30 PM, circled through the city's other large hotels and stopped at the tour office for 15 minutes. Andrae and I did the natural thing for geeks: we spent 14 minutes at a nearby bookshop. Andrae spent most of the trip out reading my copy of Patrick Geary's The Myth of Nations: The Medieval Origins of Europe while I got stuck into Terry Pratchett's hilarious and insightful Thud!
The island is dominated by human activity. A four-lane pylon-and-arch bridge traverses the bay and leads to an economy split between agriculture and tourism. Residents grow wheat and grapes, raise horses and cattle, mine sand, operate caravan parks and service businesses and do a bit of commercial fishing. The Penguin Parade used to ring the island's coast nightly before the coming of humans, but the twentieth century saw so much habitat destruction and interference with breeding and brooding that only a single colony at Summerland Peninsula remained by the 1980s. That colony was saved by the concerted efforts of the Penguin Parade staff and a bit of entrepreneurial eco-tourism. They even used tourism money to purchase land from an adjoining housing development to extend the reserve. The population is slowly increasing.
Biologists have been counting and studying the penguins since the 1960s. A full-time biologist was funded in the 1990s and now runs the research effort. Counting is done manually, through binoculars from an observation tower. We were told that the penguins have so far defeated more technical and exacting techniques. There is also the issue of observational consistency. Using the same method of counting year after year ensures the numbers do not require complicated normalization.
A large welcome centre includes the critical food and gift shops, ticket sales and offices. A wide boardwalk leads to the sea past acres of penguin burrows. The boardwalk splits two thirds of the way down. The main path goes to the observation tower and two large viewing stands. A smaller walk leads to a separate and lower set of raised bleachers for those who were willing to pay a bit more. Andrae and I paid up to get closer to the penguins.
No cameras were allowed, although photos were naturally for sale in the gift shop. The penguins would be quite bothered by flashes, and people, being people, cannot be trusted not to shock them into abandoning their eggs for the chance of a good shot. Rangers were on hand to enforce that rule, along with a strict ban on littering. It seemed to work: I saw little trash and no flashes. The seating areas were lit with monochromatic yellow light, comfortable for humans if a bit dim and odd in a way hard to explain. Penguins see only in the blue portion of the spectrum, so the yellow lights do not bother them.
The penguins come ashore after one or two days' hunting for juvenile fish and squid, approaching carefully after sunset. A few will surf ashore, then scuttle back to the sea at the first sign of trouble. A larger group will eventually gather near shore and make a break for land in a grouping aptly called a raft.
They really are small. An adult is about 33 centimeters (thirteen inches) tall and weighs just a kilogram.
They scramble over the seaweed, walk and hop up the sand dunes and head straight for their burrows. The burrows are dug in firm sand, or, in this area, some are man-made boxes placed to encourage population growth. Penguins can live long lives and may change partners several times, but like to return to the same burrow if they can. The oldest penguin studied is over twenty years old. That is quite an accomplishment given the average life span of six and a half years!
December is breeding season. We saw one mating pair and heard others. Coos of little chicks erupted as one parent returned from fishing to regurgitate food to them. Parents take turns feeding and protecting chicks until they are old enough to grow their own waterproof adult feathers and hunt for themselves. Chicks hatch after a short 35-day incubation and undergo very rapid growth. After only six weeks, the young can hunt in the open ocean. The young, full of energy, can go as far as Adelaide on their early trips! The average is more like 10-15 km in a day, but some penguins from this colony have been found as far away as Sydney.
Most will fish for a day and return; some may stay out for up to five days.
They hunt alone when in the open ocean. They must return to the colony to preen their feathers (and get them preened), socialize, mate, lay eggs and care for young. All of their eating is done at sea; they eat nothing on land.
Little penguins can dive as deep as 65 meters, as determined by depth gauges strapped to a few by the researchers. Average depth of a dive is about 10-15 meters, but diving depths have recently increased due to a virus affecting one of their main food sources. They breathe air, and so have to return to the surface after each dive. Dives can last as long as a couple of minutes, but average around 40 seconds.
These penguins, like most birds, have eyes on the sides of their heads. This is a common trait of land-based prey animals, but these little guys are hunters. They are also prey. They are vulnerable at sea to sharks, tiger seals and some seabirds, and on land to falcons and mammals such as foxes, dogs, cats. Seagulls won't eat them but, as scavengers, will sometimes attack them to make them vomit their food.
Phillip Island provides an opportunity for the general public to participate live in one of nature's dramatic events, close to a major city, and yet the experience is safe for the penguins too. It is eco-tourism done right. One can see the full panoply of a species living, growing, feeding, breeding and surviving predation just as if David Attenborough were there narrating, while the birds remain unaffected by our presence. The organization of the facility and the constant watch by the rangers allow the species to thrive in spite of us. That is a beautiful thing. I highly recommend the little penguins of Phillip Island to anyone who can make the trip, especially since all proceeds go to the operation of the not-for-profit foundation and its research program.
Friday, December 08, 2006
OSDC 2006, Melbourne Australia
Day 1
I attended this year's Open Source Developers Conference in Melbourne, Australia, from 6-8 December 2006, after a week in Brisbane doing research in the library at The University of Queensland.
The fabulous exhibition of aerial photos by artist Yann Arthus-Bertrand was in Melbourne and set up by the Yarra River near the centrally located Flinders Street Station. The Internet version is good and I recommend viewing it, but it doesn't compare with interacting with a live crowd. I had the good fortune to see this exhibit a couple of years ago while in Bristol, England.
Randal Schwartz gave the first keynote. He presented a tried and true talk regarding his position on Open Source licenses. He is a BSD/Artistic adherent, compared with Richard Stallman's insistence on the GPL. Randal and I discussed this in some detail after dinner and he saw my point regarding the desire to keep free code free while being pragmatic about the need for looser licensing than the GPL to encourage use. In return, I freely acknowledged his point that very unrestrictive licenses encourage the wider use of Open Source software. It was a pleasure to discuss this with someone who both knew the licenses and could have a discussion about them without getting religious and emotional.
On the other hand, Randal did get emotional when I asked him about Perl 6 and Parrot ;) He made the fine point that Perl 6 will continue to scare him until a real release date becomes guessable. He needs to prepare training material (and pay for it) in advance, and that makes it a big deal for him.
My talk on RESTful Software Development and Maintenance went quite well and I was pleased to see that people were interested. It was a bit risky discussing pure future-oriented research at a (pragmatic) Open Source conference, but I'm glad I did it. Lots of interesting discussions ensued, fostered by breaking the mould a bit. I changed the talk at the last minute to include a longer introduction on why software maintenance is so hard, primarily to see whether it resonated with people. It did, and I plan to blog a bit of it as I get my head around it better.
Dr. Damian Conway presented his amazingly cool The Da Vinci Codebase (see the O'Reilly page on this here). If you get the chance, it is not to be missed.
Paul Fenwick of Perl Training Australia provided some entertainment on the train ride back to Flinders Station by showing the World of Warcraft version of the song The Internet is for Porn on his laptop at the highest volume. Reactions from other passengers were, umm, mixed. Several (male) commuters watched avidly while a group of Sikhs went stoic. Paul's girlfriend put up with it, while suggesting headphones occasionally.
Day 2
The second day of the conference was not tailored directly to my interests, but I took the opportunity to learn about other areas. The recent advances in Python's ctypes library are very useful and will, upon release, give Python programmers wonderfully powerful access to the OS and hardware. I also learned a bit about the Mono project and the Boo scripting language for Microsoft CLI/Mono, and attended a cool talk on the Australian Synchrotron and its use of Open Source software. It is worth noting that the control systems engineer from the synchrotron had no idea what licenses his free software was under because he was "not interested in I.P." I consider that sort of attitude dangerous to the entire Open Source software movement. Very little software is actually in the public domain due to implicit copyright rules, so licenses are the only protection for Open Source users and developers alike.
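To show why ctypes generates such interest, here is a minimal sketch that calls straight into the C library from stock Python, with no compiled extension module. It assumes a Unix-like system where find_library can locate the C runtime; the two functions chosen are merely illustrative.

```python
# Minimal ctypes sketch, assuming a Unix-like system: call two libc
# functions directly from Python, no extension module required.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

libc.time.restype = ctypes.c_long                     # declare time()'s return type
print("seconds since the epoch:", libc.time(None))    # time(NULL)
print("process id:", libc.getpid())
```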
The highlights of the day were the incredibly high-quality lightning talks. These included an understandable five-minute description of the RSA algorithm, a quick-and-dirty Catalyst application and some useful discussion on community building.
Andrae Muys gave a five-minute lightning talk on Mulgara. He presented (in five minutes!) his theory that RDF data stores exactly match relational fifth normal form, where each table contains exactly one data element and some number of foreign keys. He used C.J. Date's classic parts-suppliers database description and extended it with optional data. It was a good explanation of Mulgara's sweet spot.
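As a rough illustration of the correspondence (my own sketch, not Andrae's slides): take one row of Date's suppliers table and decompose it so that every attribute value becomes its own triple about the row's key. The URI scheme is invented.

```python
# A rough sketch of the 5NF correspondence: one row of C.J. Date's
# suppliers table, decomposed so that every attribute value becomes
# its own (subject, predicate, object) triple about the row's key.
supplier_row = {"s_id": "S1", "name": "Smith", "status": 20, "city": "London"}

subject = f"urn:supplier:{supplier_row['s_id']}"   # illustrative URI scheme
triples = [
    (subject, f"urn:schema:{attr}", value)
    for attr, value in supplier_row.items()
    if attr != "s_id"
]
for t in triples:
    print(t)
# Optional data (a nullable column, in 5NF terms) is simply an absent
# triple -- no NULL placeholder is ever stored.
```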
At night, Andrae and I went to a funky little bar off of Little Lonsdale Street called the Horse Bazaar. The walls were covered with moving projections of city scenes and a live band played rapid Gypsy music. We were supposed to meet Randal and crew, but they never showed.
Day 3
Scott Penrose gave a keynote on his project Zaltana. Zaltana uses HTML tag attributes as hints for styling, including Javascript behavior. I haven't looked at it in detail yet, but it appears to be a similar idea to Behavior. Behavior uses CSS class attributes instead of namespace-id'd special attributes. Zaltana's goal is to wrap existing content with minimal effort with a new or custom style. I was more interested in it until I realized that the existing application has to be directly modified - I was hoping that some form of dynamic filter or preprocessing script could be used to map CSS class attributes to actions, but no.
Miles Byrne spoke on Programs are just words: Designing domain-specific languages in Ruby. I like Ruby, even though I have only coded a couple of toy applications in it so far. Miles gets the award for the best cat pictures in the conference. He gave an overview of the Ruby language, and then pushed on to many, many examples of domain-specific languages written in Ruby. He showed examples for HTML parsing, Rake, ActiveRecord and several others.
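The talk was Ruby-centric, but the fluent, sentence-like style of an internal DSL translates to other dynamic languages. A loose Python analogue, with every name invented for illustration:

```python
# A loose Python analogue of the internal-DSL idea from the talk.
# Each method returns self so calls chain into something sentence-like.
class Schedule:
    def __init__(self):
        self.interval, self.unit, self.action = 1, "day", None

    def every(self, n):
        self.interval = n
        return self

    def days(self):
        self.unit = "day"
        return self

    def run(self, fn):
        self.action = fn
        return self

backup = Schedule().every(3).days().run(lambda: print("backing up"))
print(backup.interval, backup.unit)   # -> 3 day
```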
Simon Raik-Allen presented on JasperReports (see the Sourceforge project for downloads). JasperReports is a Java library for generating nicely formatted SQL reports. It is LGPL licensed.
JasperReports has some commercial add-ons from JasperSoft, the initial developer. JasperSoft is busy creating an enhanced infrastructure layer focusing on the semantics. Given that they already have OLAP support and industry-sufficient GUIs and query- and report-building tools, the semantic mappings are about all that is left. I wouldn't want to be working on Crystal Reports this year and wonder how threatening this will be to data integration companies like Experian, Nimaya and Data Infinity.
Distributed Development
Martin Pool and Robert Collins reported on the Bazaar distributed version control system that does not have a centralized repository (GPL license). They have some very radical, but potentially useful, ideas. For example, they prefer to encourage many, many branches - even a branch per bug fix/feature addition. Merges to the mainline are also done early and often and become the equivalent to an update. They thus have some tools, such as reports, to manage and view the status of all those branches, not all of which have been released as OSS yet (although at least one is available as a free service on LaunchPad). Bazaar is for use specifically by distributed development teams and is used by Ubuntu Linux, a sizable project. That makes it particularly interesting to me since I am working on software maintenance for large and distributed teams.
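For the flavor of that branch-per-fix workflow, here is a hedged sketch driving Bazaar's command-line client from Python. The trunk URL and branch names are hypothetical; only widely documented bzr commands (branch, commit, merge) appear.

```python
# Hedged sketch of the branch-per-fix workflow described above,
# driving the bzr command-line client via subprocess.
import subprocess

def bzr(*args, cwd=None):
    subprocess.run(["bzr", *args], cwd=cwd, check=True)

TRUNK = "http://example.org/project/trunk"      # hypothetical mainline

bzr("branch", TRUNK, "fix-1234")                # one branch per bug fix
# ... edit files in fix-1234/ until the tests pass ...
bzr("commit", "-m", "Fix bug #1234", cwd="fix-1234")

bzr("branch", TRUNK, "merge-1234")              # fresh copy of the mainline
bzr("merge", "../fix-1234", cwd="merge-1234")   # merge early and often
bzr("commit", "-m", "Merge fix #1234", cwd="merge-1234")
```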
Martin noted that an anti-pattern has developed in Bazaar use; some developers tend to put all of their work in their own branch instead of one branch per feature. Robert also gave a number of good reasons for having a main line, or trunk.
The fear of forking is often thrown up as a reason not to do version control the Bazaar way. Martin and Robert suggest that good distributed version control also makes potential reintegration easier.
Merging (especially often) can have some pitfalls. Adam Kennedy asked particularly about code conflicts. He wanted to know in advance whether a merge would be clean. Martin said that they are thinking about this, but the solution for the moment is to just ensure that the main line is always clean.
Andrew Bennetts carried on this theme after lunch. He talked about using Bazaar and Patch Queue Manager to ensure that tests always pass on the main line. The only way to ensure the main line is safe, of course, is to also use Patch Queue Manager (or some similar system) to manage dependencies.
Andrew suggested that code reviews are more important for distributed teams than for teams who can informally and routinely overhear each others' conversations. This is an interesting thought because it begins to sound like a methodological requirement for larger OSS systems (that have historically not operated that way, possibly due to size). VOIP is also making distributed pair programming more affordable, especially for OSS projects.
The lightning talks were awesome, again. I am suddenly thinking about a conference format that starts with a series of lightning talks so that individuals can have a basis for choosing which sessions to attend.
Richard Jones showed the Selenium IDE, a test suite IDE for Firefox. Selenium tests themselves are cross-browser. There are heaps of (poorly implemented) test suites for browsers, but this one seems worth a look. It is also OSS under an Apache 2.0 license. The addition of an OSS IDE to this market makes it very easy to construct tests.
A talk was given on SELinux, when what I really needed was a talk on chcon :)
A wild talk was from a guy who implemented Lisp without (most) parentheses. It was actually readable! Kinda cool...
Jon Oxer proposed a really simple, but complete, language he called OSDcLang (actually yesterday). Today, Paul Fenwick introduced the ACME::OSDC perl module to compile it. Scary. Cool, but scary.
There were too many lightning talks to really capture. One guy proposed that OSS apps have source code inspection/editing access in their GUIs. Another introduced a perl module he had just written and uploaded to simplify for/else syntax.
Andrae and I had a discussion with Adam Kennedy regarding spam prevention. I won't steal his thunder, but I am going to introduce him to friends at Yahoo! who are constantly looking for better anti-spam techniques. Adam is great at solving general problems and at telling people and automation apart. I have some hopes for this one. I thought I had a solution last week, but it was shot down upon more detailed review.
I learned a lot about Perl internals after dinner by listening to Randal, Adam, Paul, Andrae and some others discussing Adam's attempt to make a perl parser. It turns out that nothing, not even perl, parses perl. There is no grammar, and there are places in the code where perl actually makes probabilistic guesses about how a token should be interpreted based on what follows it. Throw in dynamic language extensions and you start to see where this is going. Adam was trying to parse perl entirely in order to provide an API for editors to use. He still has some hope, but I was surprised at how hard it was.
Overall, it was a great conference. I look forward to coming next year, hopefully to Hobart, Tasmania.
Sunday, November 26, 2006
Requiem for a Lost Soul
My brother, you had it all.
You were so smart, so strong, so handsome.
You took our father's name and made me jealous.
My brother, your skills stunned us.
You were a wonderful musician,
a skilled linguist, a clear thinker, a nice man.
My brother, why did you not see your own worth?
Why did you need others to validate you?
Why did you need others to force you to lead?
My brother, you loved and were loved
and yet it was not enough to save you.
Your poison of choice was too strong.
My brother, now you are dead
and the world is a happier place for it.
Not one of us anticipated that.
Monday, November 13, 2006
Sun Makes a Really Great Mess
In a shock move, Sun Microsystems, every geek's favorite non-profit corporation, released Java ME and SE today under - get this - the GNU General Public License version 2.
Do they have any idea what they did to the industry? I don't think so. Sun seems to be claiming that Java programs which run on a GPL'd Java Virtual Machine are not "derivative works" of the Java language.
A derivative work in the GPL is defined as it is under copyright law, namely, "a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language."
The question is whether a Java program will be considered a derivative work of Java (such as when you extend java.lang.Object or use reflection) by any court, anywhere, under any nation's copyright law. That it will happen somewhere seems likely to me and incredibly dangerous to the Java industry.
Why is this such a problem? Because the GPL says, "But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it." This is the "viral nature" of the GPL and the heart of the debate that will dominate the blogosphere and the industry news media for the immediate future.
I think Sun has just let a very powerful Pandora out of the box.
Thursday, November 09, 2006
International Semantic Web Conference (ISWC) 2006 DAY 3
Thursday, 9 November 2006
I attended the track this morning on Applications of SW Technologies with Lessons Learned, including these papers:
Crawling and Indexing Semantic Web Data (Andreas Harth, Juergen Umbrich, Stefan Decker),
Using Ontologies for Extracting Product Features from Web Pages (Wolfgang Holzinger, Bernhard Kruepl, Marcus Herzog) and Characterizing the Semantic Web on the Web (Li Ding, Tim Finin).
I asked Andreas Harth and Li Ding (Swoogle) about indexing RDFa content and confirmed my opinion regarding its difficulty. Neither project currently indexes RDFa documents for the simple reason that they have no way to identify RDFa content without parsing every XHTML document they come across. The cost of doing that is too high.
I spoke with DanC and Ivan Herman about this at some length, but nobody seems to know what to do about it. Do you add an in-document identifier for RDFa content? If so, you lose a critical RDFa feature: the ability to cut-and-paste sections of content without losing machine readability. Do you just point to RDFa compatible documents from other documents in such a way that search engines get the hint they need? Swoogle would be fine with that, but it doesn't address how RDFa documents are consumed in a browser by the general public. Perhaps the answer is, as Steve Harris would have it, that your browser should just parse a document locally to see if it contains any triples of interest to you. It doesn't address global searching, but many people seem willing to cede that to those willing to parse the documents, like Google.
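To make the crawling objection concrete, here is a small sketch (my own, not Swoogle's or SWSE's code) of the only detection strategy available today: scan every fetched page for RDFa-looking attributes before committing to a full parse. The attribute list and threshold are invented heuristics, and the scan is still proportional to document size, which is exactly the problem.

```python
# Cheap pre-filter for RDFa-ish markup; a heuristic sketch only.
import re

RDFA_HINTS = re.compile(r'\b(?:property|about|datatype)\s*=', re.I)

def might_contain_rdfa(xhtml, threshold=2):
    # Still O(document) per page -- the crawler must touch everything.
    return len(RDFA_HINTS.findall(xhtml)) >= threshold

page = '<p about="#me">My name is <span property="foaf:name">Jo</span>.</p>'
print(might_contain_rdfa(page))   # -> True
```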
The W3C's RDF-in-XHTML Task Force, which has recently moved to the Semantic Web Deployment Working Group, has discussed this at length and not come up with an answer. I don't have one myself.
Wednesday, November 08, 2006
International Semantic Web Conference (ISWC) 2006 DAY 2
Wednesday, 8 November 2006
Susie Stephens of Oracle presented her Industry Track paper Integrating Enterprise Data with Semantic Technologies. Oracle 10g Release 2 embeds a SPARQL-like graph pattern into SQL (that is industry speak for "We don't support SPARQL, but please don't penalize us for it"). There is apparently no funded project within Oracle to support SPARQL. A forward chaining rules engine, equivalent to Mulgara's Krule, supports RDFS and user-defined rules.
She said that Oracle has put up to 1 billion RDF statements into Oracle. That tells me that we had better get onto funding Mulgara's XA2 next-generation data store in order to stay relevant in terms of scaling. However, she only showed numbers for query speeds up to 80 million triples.
Oracle's advantage is building on their mature database, which allows them access to existing features, such as encryption or scaling or clustering. A nice example of combining previous and new features was shown which combined a multimedia search with a term equivalence being given in RDF. Thus, a search for X-rays of "jaw" could pick up those tagged with the term "mandible".
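A hedged sketch of the "jaw"/"mandible" idea using rdflib (the Python library mentioned later in these notes): a single equivalence triple lets a search for one term pick up items tagged with the other. The vocabulary URIs and equivalence property are invented; Oracle's actual mechanism was not shown in this detail.

```python
# Term-equivalence search, sketched with rdflib; all URIs are invented.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/terms/")
g = Graph()
g.add((EX.xray42, EX.depicts, EX.mandible))       # the tagged X-ray
g.add((EX.jaw, EX.equivalentTerm, EX.mandible))   # the term equivalence

def search(term):
    # Expand the query term through any stated equivalences, then match.
    terms = {term} | {o for _, _, o in g.triples((term, EX.equivalentTerm, None))}
    return [s for t in terms for s, _, _ in g.triples((None, EX.depicts, t))]

print(search(EX.jaw))   # finds xray42 even though it was tagged "mandible"
```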
Oracle 11g will support some level of OWL, reportedly something "similar" to OWL Lite but modified to reduce the ability to perform computationally intensive searches.
Oracle seems to have made it easy to get started with Semantic Web technologies. That is a good and positive thing for everyone in the industry. Her comments regarding modifications to the existing URI-based identifiers to allow use of existing unique identification schemes concerned me, though. Semantic Web technologies without URIs would be a huge step backward, even taking into account the obvious short term gains. Better to facilitate the mapping of existing identifiers to URIs.
Susie also mentioned that webMethods has announced RDF and OWL support in the new version of Fabric. Indeed, this press release from webMethods says that Fabric uses RDF and OWL. Specifically, "the library automatically learns dependencies and relationships between IT assets."
Explaining Conclusions from Diverse Knowledge Sources (J William Murdock, Deborah McGuinness, Paulo Pinheiro da Silva, Chris Welty, David Ferrucci). Their Open Source framework, Unstructured Information Management Architecture in Java, seems worth a look. The goal is to produce good search results when dealing with a mixture of structured and unstructured content. The example in the talk involved some textual data extraction coupled with some theorem proving.
There was an active Semantic Web Services track at the conference. This is hardly surprising, since UDDI is so badly broken and Web Services are left without a reasonable way to perform composition and discovery. Yet Semantic Web Services still seem to be mired in academia. Perhaps the industry will start to see the light if Oracle and webMethods successfully deploy useful semantic tools to the community.
A Software Engineering Approach to Design and Development of Semantic Web Service Applications (Marco Brambilla, Irene Celino, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico Michele Facca) described a top-down approach toward annotating Semantic Web Services using a Spiral development model. This was the first time I have seen anyone actually use WebML. I am going to have to look at that, especially since there is a tool which implements it. They also used WSML. The link provides an interesting summary of Web rules languages.
RS2D: Fast Adaptive Search for Semantic Web Services in Unstructured P2P Networks (Matthias Klusch, Ulrich Basters) presented a model for Open World searching of semantic services. They introduced concepts like Semantic Gain, Semantic Loss and a Bayesian-derived risk factor to judge the likelihood that peers would have something to add to an answer. The idea is to use machine learning to reduce gratuitous network communication when querying Semantic Web Services. The algorithm works well for unstructured, peer-to-peer networks without a single authoritative source for information. This was also a best paper nominee.
Web 2.0 panel: Tom Gruber of realtravel identified "Collective Intelligence" as the critical feature of Web 2.0 and the area where the Semantic Web can provide the most value. Truth and semistructured queries are the critical components. "Don't ask what the Web knows, ask what the World knows", by which he means, "ask what people know."
The problem with SemWeb apps seems to me to be that they are almost all closed world. We need some large-scale, open world applications. Simile or Tabulator are the closest I have seen and incredibly cool, but they are far from mainstream. We need to encourage the development of more SemWeb apps which pull data from the Web and publish data back to the Web. I have been as guilty as anyone else on this, but I promise to try to get better.
Tom Gruber suggests that any app to address this problem should explicitly allow others to mash up on top of it. That is an excellent point.
Patrick Stickler of Nokia and Marja-Riitta Koivunen of Annotea and I discussed the state of SPARQL at dinner. The lack of a simple syntactical means of performing negation is a real problem. Perhaps we can fix that before SPARQL becomes a W3C Recommendation. Perhaps too I should recover my notes on Mulgara/Kowari/TKS's EXCLUDE operator and its relation to Jena's NOT. We don't need an RDF query language standard that is hard to use, hard to implement and has two different types of null...
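For the record, the workaround in the current drafts looks like this: "people with no email" must be phrased as an OPTIONAL match plus a !bound() filter. A sketch runnable with a recent rdflib; the data is invented.

```python
# SPARQL negation via OPTIONAL + !bound(); data is illustrative.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.alice, EX.email, Literal("alice@example.org")))
g.add((EX.bob, EX.name, Literal("Bob")))     # bob has no email triple

query = """
PREFIX ex: <http://example.org/>
SELECT ?person WHERE {
    ?person ?p ?o .
    OPTIONAL { ?person ex:email ?mail }
    FILTER (!bound(?mail))
}
"""
for row in g.query(query):
    print(row.person)    # -> http://example.org/bob
```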
International Semantic Web Conference (ISWC) 2006 DAY 1
Tuesday, 7 November 2006
The first day of ISWC was intensely busy for me. I was only able to attend a single session due to the number of people I spoke with.
I had a lengthy and interesting conversation with Harry Halpin, co-chair of the W3C's GRDDL Working Group, regarding RDFa and GRDDL. I found myself representing RDFa, which I did to the best of my ability. Fortunately, I was able to recall the critical use case for RDFa over GRDDL: the requirement to support a cut-and-paste of a block of XHTML without losing information on how that block should be interpreted. We discussed at length RDFa's potential showstopper - the lack of explicit identification of RDFa content. That prohibits searching for documents which support RDFa and leaves open the question of whether one may rely on RDFa extraction from documents where there is no a priori knowledge that a source document complied with RDFa markup. I suggest that will kill RDFa in practice unless it is addressed.
Harry cheered Eric Miller's push for a "persistent URI" service for RDF identifiers, similar to the persistent URL service operated by OCLC at purl.org.
Harry and I were tutored by IBM's Chris Welty on the difference between rdfs:Resource and owl:Thing. I can never remember the difference. rdfs:Resource includes the language's own built-in features, owl:Thing does not (in OWL DL), but the two are equivalent in OWL Full. Thanks, Chris!
Chris has made a tremendous amount of progress as co-chair of the W3C's Rules Interchange Format (RIF) Working Group. The group has reportedly agreed that they will, in fact, produce a rules interchange format (hey, that was hard!), that they will define a core feature set based on positive Horn logic and that they will support an arbitrary number of non-Horn extensions via an extension mechanism. That level of early structure should allow the group to proceed without the factionalization that dogged the WebOnt group (producers of the OWL standards).
Years into Semantic Web development, we still need a coordinated location for good ontologies. Harry spoke to the fellow who runs http://semanticweb.org/ about hosting ontologies and mappings between Web 2.0 vocabularies and RDF. The response was positive, but it still needs to happen. Unfortunately, I did not get his name.
Apparently there is still a need for a good geospatial ontology, even for simple agreed concepts like latitude and longitude. This has been a problem for years, especially within government circles and supporting organizations such as MITRE. Harry pointed me toward Harry Chen's blog entries here.
Harry and Norm Walsh have been working on an isomorphic mapping between vCard and RDF. Details are available on Norm's blog, although Harry told me there is a newer version which he promised to send me. This is a useful thing, especially as it makes use of some FOAF to keep the mapping clean.
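I have not yet seen the newer version, so the following is only a guess at the flavor of such a bridge: copying a vCard formatted name onto foaf:name, sketched in rdflib with invented instance data:

```python
from rdflib import Graph, Literal, Namespace

VCARD = Namespace("http://www.w3.org/2001/vcard-rdf/3.0#")  # W3C vCard-in-RDF note
FOAF = Namespace("http://xmlns.com/foaf/0.1/")
EX = Namespace("http://example.org/people/")  # hypothetical instance data

g = Graph()
g.add((EX.norm, VCARD.FN, Literal("Norman Walsh")))

# Hypothetical bridging rule: every vCard formatted name also becomes a foaf:name.
for person, _, name in g.triples((None, VCARD.FN, None)):
    g.add((person, FOAF.name, name))
```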
Steve Harris, of OWL Tiny fame, is now at UK identity protection startup Garlik. Garlik is currently operating in the UK only, but they are planning a US market entry next year. They ask their customers for the personal information used to identify them to their banks and watch public and subscription databases to determine if others are using their identities. Naturally, this is done via an RDF graph. There certainly is a need for some kind of identity protection service in the US. USA Today ("McPaper") reported today that 8.9 million Americans (4% of the population) have their identities stolen each year and that it costs them an average of US$6,383.
I finally met Chimezie Ogbuji, now at Cleveland Clinic and formerly of Fourthought. He worked on the 4Suite CMS, which uses RDF and XML databases to manage content. Interestingly, Chimezie is a fan of Daniel Krech. 4Suite uses Daniel's rdflib! It is also using a GRDDL-like transform between XML documents to generate the RDF.
David Taowei Wang did a good job presenting A Survey of the Web Ontology Landscape (Taowei Wang, Bijan Parsia, Jim Hendler). Bijan now has a huge unkempt beard and a nineteenth century waxed mustache. He appears to be enjoying teaching at Manchester.
The most interesting paper I have seen in a while was Semantics and Complexity of SPARQL (Jorge A. Perez, Marcelo Arenas, Claudio Gutierrez). It is up for a best paper award and probably deserves it. It is great to see someone, even if not the W3C, providing a model-theoretic semantics for SPARQL. Unfortunately, the work does not yet cover entailments or bnodes, which are outstanding issues at the W3C.
I missed seeing OntoWiki - A Tool for Social, Semantic Collaboration (Sören Auer, Thomas Riechert, Sebastian Dietzold) because I was talking to Guus Schreiber, Chris Welty, Ivan Herman and Harry Halpin (again). I'll have to read it in the proceedings, though, because it looks interesting.
The poster session was well attended. Six posters from MINDSWAP were accepted, including the Semantic Web challenge entry. Unfortunately, three of them were down a long corridor in the lunch room and received very few visitors :( I was fortunate to get an excellent location with plenty of traffic and little noise from the band.
My poster, Enhancing Software Maintenance by using Semantic Web Techniques, reported on some research in progress. My purpose in submitting it to ISWC was to get initial responses to the research direction and gather some ideas for next steps. I was pleasantly surprised to receive some very positive feedback. There was a lot of interest from software engineers, especially the more pragmatic ones, such as those from IBM, Accenture and SRI International. Software maintenance costs money and that makes a market.
There were several comments regarding the depth of my use of OWL-DL. I had created an ontology of software engineering concepts which was focused on Java for the prototype implementation. I had used OWL-DL in order to use SWOOP to ensure logical consistency. A good next step would be to represent the high-level constructs of other languages so that multi-language projects could be managed in the environment. This is necessary because different languages treat even basic concepts differently, such as the separation of abstract classes from interfaces or the existence of unimplemented method signatures. An OWL-DL ontology could readily map the similar and disjoint constructs across the various languages and be used to infer inheritance relationships.
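To make that concrete, here is a rough sketch (in rdflib, with invented class names; this is not the ontology from my poster) of how OWL subclass and disjointness axioms could capture one such language difference:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

SE = Namespace("http://example.org/se-ontology#")  # hypothetical namespace

g = Graph()
g.bind("owl", OWL)

# A language-neutral concept with two Java-specific refinements.
for cls in (SE.Type, SE.JavaAbstractClass, SE.JavaInterface):
    g.add((cls, RDF.type, OWL.Class))
g.add((SE.JavaAbstractClass, RDFS.subClassOf, SE.Type))
g.add((SE.JavaInterface, RDFS.subClassOf, SE.Type))

# Java separates abstract classes from interfaces; a language without
# that separation would simply omit the disjointness axiom.
g.add((SE.JavaAbstractClass, OWL.disjointWith, SE.JavaInterface))

print(g.serialize(format="turtle"))
```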
Dr. Kerry Taylor from CSIRO in Canberra stopped by. She knows my advisor Dave Carrington at UQ and was surprised to see him involved in SemWeb work. She confirmed the earlier comments regarding inferencing across language differences.
I must look at a similar project called FAMIX. FAMIX provides "a language-independent representation of object-oriented source code and is used ... as a basis for exchanging information about object-oriented software systems." Avi Bernstein of the University of Zurich told me about it and recommended that I discuss it with his collaborator, Harald Gall. This is what conferences are for.
Monday, November 06, 2006
2006 Survey on Software Engineering Practices Closed
The 2006 Survey on Software Engineering Practices is now closed. 448 software engineers from 52 countries participated! Thanks very much to all who helped. I will post summary data from the survey as soon as I can.
Friday, November 03, 2006
Getting the World to Listen
Today's news was chock full of articles about a researcher in Canada and his colleagues in the UK and Germany who concluded the world's oceans would be fished out by 2048. What news! A little digging showed that the original article in the journal Science came out in August of 2005 - a year and a quarter ago. Why the wait until the world noticed?
It turns out that Drs. Boris Worm and Ransom Myers of Dalhousie University's Department of Biology got tired of nobody listening to their research on the loss of biodiversity in the world's oceans. They took matters into their own hands and made it easy for reporters to break the story. They prepared press releases and did most of the work for the reporters. The result was that the research was finally picked up by the Associated Press and syndicated widely.
Today, Google news reported 570 articles (!) telling the story.
It seems sad that researchers have to become experts on both science and marketing to get their message out, but there it is.
Sunday, October 29, 2006
A Perspective on India
I recently spent two weeks in Hyderabad and Aurangabad, India, from 8-21 October 2006. My notes from the trip and the accompanying photos may be found at these links:
A perspective on India
Photos, Part 1
Photos, Part 2
Please comment, especially if you are Indian! India is so big, so complex, that I am sure I got some of it wrong. Please know that I honestly tried to get it right.
Saturday, October 07, 2006
First In-Flight Post!
I am writing this post from a Lufthansa flight, somewhere over Turkey, on my way to Hyderabad, India. Lufthansa now offers wireless 802.11 Internet connections on their flights via the Connexion service from Boeing. Lufthansa calls it "FlyNet". It generally sells for US$25 per flight or US$9 per hour, but is free on this flight.
Is this cool, or what??
Friday, October 06, 2006
Survey on Software Engineering Practices
I am conducting a Web-based survey of software engineering practices as part of my Ph.D. research. Software engineers in several countries are expected to participate.
The survey questionnaire should only take between 5-8 minutes to complete. The questionnaire is available at the following URL:
http://www.itee.uq.edu.au/~dwood/sepsurvey.html
Only practicing software engineers or programmers should complete the questionnaire. Please pass this invitation to any practicing software engineers or programmers that you know. The more data the better :)
Survey respondents will be offered an opportunity to receive a summary of the survey results. Respondents wishing to see the summary results will be asked to provide an email address separately from the questionnaire. The questionnaire is entirely anonymous.
Thank you in advance for participating. Please feel free to contact me directly with any questions or concerns. Contact details are given at the survey URL.
Friday, September 29, 2006
Semantic Web Best Practices and Deployment Working Group Now Closed
The W3C formally announced the closure of the Semantic Web Best Practices and Deployment Working Group today, which I co-chaired with Guus Schreiber. Guus, Ralph Swick and the members of the group deserve the vast majority of the credit for the group's results.
The Semantic Web Best Practices and Deployment Working Group was chartered to provide hands-on support for developers of Semantic Web applications in various forms, ranging from engineering guidelines and ontology/vocabulary repositories to educational material and demo applications. During its lifetime the group produced 6 W3C technical reports and 8 Working Drafts.
All published works will remain available on the working group's site.
Thursday, September 28, 2006
Teaching at the University of Mary Washington
I am pleased to announce that I will be teaching a course next semester in the Computer Science Department at the University of Mary Washington. Mary Washington is a small, traditional, liberal arts school located in Fredericksburg, Virginia - just three miles from my house.
The course is CPSC 104, The Internet: Technology, Information, and Issues. I am told to expect twenty students from various majors. The course fulfills one of the five "writing intensive" course requirements Mary Washington students need for graduation.
My thanks to Professor Ernie Ackermann for giving me this chance to build my academic resume.
Tuesday, September 26, 2006
OCLC Compromised by the Patriot Act?
At the beginning of this year, I stated my intention to request banned books from the inter-library loan system in an attempt to determine whether OCLC was being compromised by the federal government. In short, is Big Brother really watching what we read? I sincerely hoped that the answer was "no". Unfortunately, I cannot report that.
I ordered Sayed Qutb's Milestones and the US Army's Improvised Munitions Handbook from the Ashburn, Virginia Public Library. The result was that I received Milestones after a long wait (about two months - which is much longer than normal) and never did receive the Improvised Munitions Handbook after waiting over eight months. Since it and other related books are listed in the inter-library loan system, I can only conclude that OCLC has been stopped by their own supervisors or the FBI from providing the book.
My last post on this issue was Taking a Stand on Banned Books, Part 5.
Time Ontology in OWL
The W3C Semantic Web Best Practices and Deployment Working Group (SWBP&D) has received permission to publish the First Public Working Draft of Time Ontology in OWL. No further work on this document is expected.
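For those who have not read the draft: OWL-Time describes instants and intervals with properties such as time:hasBeginning and time:inXSDDateTime. A small illustrative sketch in rdflib (the meeting instance data is invented):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

TIME = Namespace("http://www.w3.org/2006/time#")
EX = Namespace("http://example.org/")  # hypothetical instance data

g = Graph()
g.add((EX.meeting, RDF.type, TIME.Interval))
g.add((EX.meetingStart, RDF.type, TIME.Instant))
g.add((EX.meeting, TIME.hasBeginning, EX.meetingStart))
g.add((EX.meetingStart, TIME.inXSDDateTime,
       Literal("2006-09-26T09:00:00", datatype=XSD.dateTime)))
```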
This is likely to be the last official act of the SWBP&D working group, which will be closed shortly as it is operating longer than its charter allowed. Many of its ongoing activities will be taken up by the newly formed Semantic Web Deployment Working Group.
Does Story Telling Explain Religion?
Bill Moyers asked a great question of Margaret Atwood in his interview series On Faith and Reason. He noted that religious people point to a god of some form as the cause for the rain falling and the flowers blooming. Specifically, he quoted a hymn by Franz Josef Haydn putting the issue quite poetically. A scientist, he said, would not have used poetry to elegantly describe these things, but would simply offer a physical explanation. Then he asked, "We need the poetry, don't we? Are we hard wired to seek that kind of meaning in life, that only poetry, religion and writing can give us?" Atwood intelligently replied in the affirmative, noting that we are a "symbol-making" race. "We seem to need, create and exist within structures of symbolism of one kind or another."
It occurs to me that our ability to communicate verbally with each other is a fundamental part of our humanity. We tell stories in order to communicate with the most complicated entity in our world - another human being. We are hard-wired to recognize faces and to view the world through the lens of the shared experiences that we collect via stories. Stories make, use and foster analogous thinking. As we tell stories which use analogies, we invent a world view which, poetically, describes the world around us. That is as close to a definition of religion as I have been able to create.
Does that make sense? That is my story, anyway, and I suppose I'll stick to it until someone tells me a better one.
The Death of the Human-Centric URL?
I have started to notice that conference presenters are leaning away from publishing the URLs to their work. Often they simply say something like, "just Google the name of the project." It is interesting that Web search engines have become good enough that we are now relying on them to indirectly address material. We use them to enable us to tell stories (which humans remember more easily) instead of directly giving an address. Many of my colleagues now take a similar approach to Web browser bookmarks. Why bookmark when searching is often faster?
Are we seeing the death of the URL as a human-centered address in the way that the Domain Name Service replaced our need to directly communicate IP addresses? If we are not communicating URLs directly to humans, do we need to spend so much time making them human-readable?
International Conference on Software Maintenance (ICSM) 2006
Following Software Evolvability 2006, I attended the first day of the IEEE International Conference on Software Maintenance (ICSM 2006), held in Philadelphia, Pennsylvania, U.S.A. on 25 September 2006.
Jean-Sebastien Boulanger of McGill University presented a tool to analyze Java programs for separating concern interfaces. They were particularly concerned with ensuring the principle of information hiding was followed when implementing interfaces. Their tool is called JMantlet and is an Eclipse plug-in. They analyzed the transaction concern in 26 versions of JBoss and, unsurprisingly, found that as the software evolved the implementations consistently became more aware of the transaction API.
Jonathan Maletic of Kent State University reported on a tool to generate documentation for method stereotypes (StereoCode). It is only for C++, but looks like a useful tool. I wish they had done it for Java. They also made use of the srcML tool to parse the raw source.
Suzanne Crech Previtali of ETH Zurich presented the best paper that I saw at the conference. Granted, I was only there for Sunday's workshop and Monday, but their work has some real promise. The paper was entitled "Dynamic Updating of Software Systems Based on Aspects". They used an aspect-equipped JVM (Prose) to allow updates to running Java systems. Noting that classes or methods could be removed, added, modified or left unchanged, they first analyzed (automatically) which changes needed to be made, then ordered those changes to prevent unwanted side effects, then inserted the updates systematically. I really liked their approach. Being a pessimist by experience, however, I did suggest to Suzanne that they not actually delete the classes and methods slated for removal. Instead, I suggested leaving them in place and modifying them to simply throw an exception when called. Practically, because mistakes get made, there should be some more study done to ensure that this process works as advertised.
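To sketch what I suggested (in Python rather than their Java/Prose setting, so purely illustrative): a live update would rebind a removed method to a stub that fails loudly rather than deleting it outright.

```python
class PaymentService:
    def legacy_charge(self, amount):
        return amount * 1.05  # old behavior slated for removal

def _removed_member(*args, **kwargs):
    # Fail loudly: any caller missed by the update analysis gets a clear
    # error at the call site instead of a mysterious missing-method crash.
    raise NotImplementedError("legacy_charge was removed by the last live update")

# The live-update step: rebind the member instead of deleting it.
PaymentService.legacy_charge = _removed_member
```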
Software Evolvability 2006 in Philadelphia
I attended the Second International IEEE Workshop on Software Evolvability (program) at the IEEE International Conference on Software Maintenance (ICSM 2006), held in Philadelphia, Pennsylvania, U.S.A. on 24 September 2006.
Papers referenced below should become available on IEEE Xplore.
Why is it that, on the academic publication scale, journals trump conferences and conferences trump workshops, yet the most interesting interactions with colleagues and opportunities for creative insights occur at workshops? I think it is an unintended consequence of the drive to publish.
The event was held at the Sheraton Society Hill on Philadelphia's Dock Street. It is a perfect location for visitors. It is close to the historic district, home to the Liberty Bell and Independence Hall. The City Tavern, founded in 1773, is just around the corner. I particularly recommend the sweet potato and pecan biscuits, reportedly a favorite of Thomas Jefferson.
Commodore George Dewey's flagship USS Olympia lies on the Delaware River, just two blocks from the hotel. It was an easy walk, so I toured her during the lunch break. I have often used the Spanish-American War of 1898 as an analogy for the current war in Iraq (in that it was a war of choice, opened unilaterally by the US president for bogus reasons and resulted in decades of unwanted and expensive American entanglements overseas). It was thus fascinating to stand on the bridge of Olympia, from which Commodore Dewey said, "You may fire when ready, Gridley", defeated the Spanish at Manila and established an American territory over the objection of the Philippine resistance government. Olympia is a fascinatingly transitional example of naval architecture. She bristles with guns, has a steel hull and was the first warship fitted with a water cooler. She had refrigeration, based on expansion of compressed air (not freon). Her copper coffee boiler was thoughtfully lined with lead (!) to prevent copper poisoning. She is a wonder of space, except in the crew's quarters, which still had hammocks and co-located mess tables. Her officers' staterooms were each private and furnished with heavy wooden bureaus, wardrobes and desks, much nicer accommodation than on modern warships. On the other hand, the beautiful glass-and-wood skylight above her officers' country could have been lifted from a wooden ship of the line. She carries two steel masts used for auxiliary sail (a good idea when burning 20 tons of coal per hour at top speed) and a metal-componented "rope" ladder to her forward lookout station. The battleship New Jersey, huge, imposing and arguably an icon of the military dominance of the United States during the twentieth century, was visible at her museum berth across the river at Camden, New Jersey. The Olympia was there when the US became a military superpower and the New Jersey helped to assure its dominance for nearly a half century.
Bob Laddaga from BBN Technologies gave the workshop's keynote on self-adaptive software. Unfortunately, he, like many older geeks, insisted upon applying control theory to large-scale, non-linear systems. I understand the temptation - I learned control theory when I was young, too. That doesn't make it a good idea. He admitted that the available mathematics were insufficient to the task, but apparently didn't have a better approach. We really need to break out of industrial modes of thinking to address these problems.
Christopher Nehaniv from the University of Hertfordshire spoke on "What Software Evolution and Biological Evolution Don't Have in Common". I was predisposed to enjoy this talk because he started off by mentioning that evolutionary approaches could equally apply to memetics. He used the design of spoons over time as an example. The paper was an attempt to rigorously define the differences between the natural and software domains. They defined the term genotype as inheritable information and phenotype as everything else (non-inheritable information). We could all argue that for a while.
The workshop chair, Paul Wernick (also of the University of Hertfordshire), asked whether certain software metrics, such as loose coupling between modules or high cohesion within modules had been proven to be correct. Nobody in the room knew the answer, but they all suspected not, at least not rigorously. I really need to track down the answer to that.
I wonder whether applying evolutionary algorithms to software will suffer the same fate as applying control theory to software. We are desperately searching for the right analogy, but having trouble defining what a genotype, a phenotype, or evolution in general mean in relation to software. Which one is more complicated, software or biological species? Humans would seem to have roughly 30,000 (rather complicated) genes, and some pine trees have roughly 45,000 much simpler genes. Are the "genes" in software systems simple or complex? In short, applying Darwinian concepts may not make sense in that software does not necessarily have a population of discrete peers, and so population genetics may not apply directly. Software evolution may need a new theory, developed from the beginning. Taqi Jaffri of Microsoft had the same thought. We talked during a break and he suggested that software evolvability may be the more general case, with biological evolution the smaller, special case. I tend to think of it more like the figure below:
Interestingly, Nehaniv suggested that the software lifecycle concept should be considered harmful. The reason is that the lifecycle treats a software system in isolation from the system's deployed environment and its developers' intentions. Of course, that seems to be a direct analogy to Darwinian evolution ;)
One of the participants shouted out that one may control for result, or for process, but not for both. Reference?
Chris Landauer from the Aerospace Corporation spoke on "Wrapping Architectures for Long-Term Sustainability". He noted that the three aspects of a sustainable system are functionality, the expected environment and the expected styles of use. People focus on the first, to the exclusion of the other two. That three-prong model was a theme of the workshop.
Nicely, Chris quoted a colleague as calling software the "intellectual caulking material" of our constructed complex systems. I think that applies increasingly to our society as a whole. Consider our kitchens. The refrigerator in mine contains a control board which implements a control system to minimize energy usage. That board has an EEPROM on it, which is embedded software. The designers could certainly have created a purely analog control board, or an electronic one without an EEPROM, but they didn't. They used software as a caulking material.
Wen Jun Meng from Concordia University spoke on "A Context-Driven Software Comprehension Process Model". Her team applied a workflow process to enhance software comprehension. They used an OWL-DL ontology and the Racer reasoner. The system is story-based and implemented as an Eclipse plug-in.
Markus Reitz of the University of Kaiserslautern spoke on "Software Evolvability by Component-Orientation - A Loosely Coupled Component Model Inspired by Biology". This is one of the first times I have seen a model of software components based on an ant colony analogy. It looks interesting. However, there are already so many software component models in existence. This one will have to prove that it is useful and that existing ones couldn't serve. Markus claims that his model was necessary. I will have to read the paper.
Bob Dickerson's thoughtful paper on the poor state of the computer industry was read in absentia by Paul Wernick. He had a cute take on software development methodologies, calling them "like tying an elephant to a knitting needle to keep it from rampaging on". I like it. Obviously, one needs a thicker needle.
I did have to correct Paul on one point in Bob's paper. Tim Berners-Lee did not invent the hyperlink (Ted Nelson did). TBL invented the URI.
Bob has decided that procedural programming, the major means of coding and a direct result of von Neumann's original and still-used computer architecture, is a failure. He criticized the results of procedural programming strongly. However, I tend to support Fred Brooks's view that the problem is not the encoding of ideas, it is the mapping of intent to code. I think that most problems that we have with software development are directly traceable to a failure to understand what we wanted to do.
I presented a paper on my ongoing research, entitled "Toward a Software Maintenance Methodology using Semantic Web Techniques". I added quite a bit of REST orientation to my slides, since my thoughts on the REST architectural style are newer than the paper accepted at this workshop. It was well received. I think I am finally wrapping my head around the subtlety that is REST. It will be interesting to complete the implementation and get to the user studies. I need to prove that these ideas make some positive impact.
Slinger Jansen (Utrecht University) said that my work reminded him of Anthony Finklestein at University College London. He and his students apparently developed a tool called XLinkIt. I need to see the papers.
Huzefa Kagdi of Kent State University talked about "Software-Change Prediction: Estimated+Actual". His literature review included a nice summary of approaches to mining software repositories. He noted that software change detection tends to look at a very low level of the code (within the class/function level). His tool for this work is srcML, which I would like to look at.
Per Jönsson of the Blekinge Institute of Technology spoke on "The Anatomy - An Instrument for Managing Software Evolution and Evolvability". Ericsson in Sweden has a tool known as The Anatomy. Per hopes to reverse engineer the reasons that The Anatomy helps developers and generalize the results into a software engineering style. He wanted to know if the research project made sense. I guess so, since that is basically what Roy Fielding did with the Web and REST.
Paul Wernick (yes, again) presented a paper on his ideas regarding software evolution as an Actor-Network. Actor-Network Theory (ANT) treats society as an intertwined collection of networks of interacting people, things, ideas, etc. Besides actors, the network may include mediators. Actors are constrained by their connections, as well as being free to communicate via them. Complex (in the academic sense) behavior arises. He suggests treating people (both individually and as organizations) and software as peers. He thinks that treating them as separate entities might be interfering with the search for the atomic elements of software evolution. He notes that a software system is evolved consciously; it doesn't "just happen". The system influences the users, who demand changes, which creates change requests, which causes developers to change the code... An initial conclusion is that an ANT approach results in a very complex model (surprise!) and is thus difficult to understand. However, ANT could help answer "soft" questions, such as, what happens if the project sponsor loses interest?
Bob Laddaga (BBN) commented that the ANT approach might be too friendly. He sees software evolvability as more of a competition. Paul and Kirstie Bellman (Aerospace Corporation) think that there is room for both friends and enemies in ANT.
Where do requirements fit into such a model? Requirements are a dream, or illusion, shared between the actors.
Ilia Heitlager of the University of Utrecht presented "Understanding the Dynamics of Product Software Development Using the Concept of Co-Evolution". He made the interesting observation that we create software using a project management philosophy, and projects by definition have to end. Compare this to product software goals: exponential growth. Ilia is a former software startup CTO and I was amused at his observations, which sounded familiar ("real data is dirty", "Hegel's Second Law bit us").
Ilia's team implemented their own XML-based pipe description language for their Web-based product. How much easier it would have been for them to have used NetKernel and DPML/BeanShell! This highlights a real problem in the software industry. We still tend to convince ourselves that we just must reinvent the wheel. Of course, licensing is often an issue.
Ilia pointed out the Abernathy and Utterback model for product manufacturing. It relates innovation to transition time. See Figure 4 in their paper. The model was defined in 1975, but has not been picked up by the software development community for some reason. His team is interested in determining whether the Abernathy and Utterback model is applicable to software. That sounds like a really great idea.
The workshop closed with a panel discussion. There was a general consensus by the end of the panel that evolution in general transcends biological evolution and that software evolution may be quite different from "descent with modification".
Kirstie Bellman: "We do our students a disservice when we tell them that Computer Science has something to do with computers. It has more to do with our own cognition." This is a good point and a nice way to say it.
Paul doesn't like the term "maintenance". We don't really maintain a software release, we really continuously add features.
Saturday, September 09, 2006
Pluto's rePublic
The Smithsonian Institution has a nice set of markers along the National Mall, starting in front of the National Air & Space Museum. The markers provide information about the planets in our solar system, including Pluto, and are spaced proportionally to their mean distance from the sun. Pluto's marker is a long way down the street, in front of the original Smithsonian castle.
Members of the public left condolences at the marker following Pluto's demotion to a dwarf planet by the IAU. Details of the notes at the base of the marker are available here.
I reported on Pluto's status change in the post Planet Status Resolved.
Friday, August 25, 2006
Canis Non Gratus
My dog is an Australian yellow Lab, almost eleven years old. He is a happy, easy-going character (pretty much the only one in our house!). He adjusts to differing schedules, doesn't mind changing his meal times, walks himself when we are very busy and doesn't demand much. Even people who don't like dogs generally come to get along with him.
Unfortunately, he has been exhibiting a strange new behavior in the last three weeks. Every morning at around 5:30 AM, he puts his paws on my side of the bed and pants loudly. Even after I make him get down, he stares at me and pants heavily. He doesn't want to go outside, doesn't want a pet, doesn't want water, doesn't want to play. He is just suddenly quite uncomfortable. Unable to figure this out, we have taken to making him leave the room so we can sleep.
I took him to our local veterinarian. No problems. Healthy as a, err, larger quadruped.
After a couple of days, it occurred to me that he might be hearing an ultrasonic sound from our lawn's sprinkler system. It starts at about that time. So I did the obvious thing: I changed the time that the sprinkler started. No luck. At 5:30 AM I had paws on my arm and doggie breath in my ear. We were at a loss. How could my old friend suddenly become canis non gratus?
Separately, we began to suspect that the sprinkler was going off twice. The manual was unclear regarding the operation of a three-position slider switch. Was the switch to select which of three programs would run? Or to select which of three programs you could adjust with the other controls? I had presumed the first. It was the second. Creating another program simply ran both. The sprinkler was still starting at 5:30 AM. I cleared the first program.
We waited with bated breath the next night. Would we sleep through or be jarred into full activity by our canine interlocutor? It worked. When we awoke, our dog was still sleeping peacefully. We had found the problem.
Now, what do we do with a sprinkler system buried in our yard at great expense which distresses our dog?
Thursday, August 24, 2006
Apple Recalling Laptop Batteries
Apple Computer is recalling batteries for the iBook G4 and PowerBook G4 sold between October 2003 and August 2006.
The form to request a new battery is here. Unfortunately, the site is being hit so hard right now that they are rejecting users. The rejection shows up either as an HTTP error or as a form submission claiming that your serial number is invalid. Just wait your turn and it will work out :)
Planet Status Resolved
The International Astronomical Union (IAU) has made its decision on the definition of a planet. Resolution 5A: Definition of 'planet' states:
The IAU therefore resolves that "planets" and other bodies in our Solar System be defined into three distinct categories in the following way:
(1) A "planet" is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood around its orbit.
(2) A "dwarf planet" is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape , (c) has not cleared the neighbourhood around its orbit, and (d) is not a satellite.
(3) All other objects except satellites orbiting the Sun shall be referred to collectively as "Small Solar-System Bodies".
Thus, Pluto is no longer considered a planet. Similarly, Charon, Ceres and 2003 UB 313 (which is not called "Xena" by the IAU) are not classified as planets. The proposal to do so was rejected.
I note that they left "satellite" undefined, but that's fine with me. Categorization can easily be taken too far.
The eight planets in our solar system are now Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune.
I am pleased to see that the IAU made this decision carefully. Although my suggestion for a resolution was never considered, the end result is similar. We are safe from the possibility of discovering hundreds of new "planets" in our system.
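An aside for the programmers: the resolution reads like a decision tree, and it is one. Here is a minimal Java sketch of the taxonomy, in which the boolean flags stand in for astrophysical judgments that are anything but boolean in practice:

```java
// Classify a Sun-orbiting body per IAU Resolution 5A. The flags stand in
// for real observational judgments; this sketch only encodes the logic.
public class PlanetClassifier {
    enum Category { PLANET, DWARF_PLANET, SMALL_SOLAR_SYSTEM_BODY, SATELLITE }

    static Category classify(boolean hydrostaticEquilibrium,
                             boolean clearedNeighbourhood,
                             boolean isSatellite) {
        if (isSatellite) return Category.SATELLITE; // excluded from all three categories
        if (hydrostaticEquilibrium && clearedNeighbourhood) return Category.PLANET;
        if (hydrostaticEquilibrium) return Category.DWARF_PLANET;
        return Category.SMALL_SOLAR_SYSTEM_BODY;
    }

    public static void main(String[] args) {
        System.out.println("Earth:  " + classify(true, true, false));   // PLANET
        System.out.println("Pluto:  " + classify(true, false, false));  // DWARF_PLANET
        System.out.println("Ceres:  " + classify(true, false, false));  // DWARF_PLANET
        System.out.println("Halley: " + classify(false, false, false)); // SMALL_SOLAR_SYSTEM_BODY
    }
}
```

Note that the satellite flag has to be taken as given, which is exactly the begged definition mentioned above.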
Tuesday, August 22, 2006
webMethods buys Cerebra
webMethods has announced that it has bought Cerebra. That is almost assuredly a fire sale, in spite of the positive spin put on it by analysts.
Saturday, August 19, 2006
Mulgara v1.0.0 Released
The Mulgara project is pleased to announce Mulgara version 1.0.0. Binary and source distributions are available for download now. This is the initial release for this fork of Kowari and marks the reestablishment of a development team capable of continuing the project.
Thanks to all who participated in this release. New developers and users are actively encouraged to contribute!
Northrop Grumman Drops Tucana References
Northrop Grumman Electronic Systems has dropped the Tucana pages from their Web site. No explanation has been offered.
David Watanabe Fixes the OS/X Desktop
OS/X is pretty great. Still, Joy's Law ("No matter who you are, most of the smartest people work for someone else.") ensures that innovation will occur outside of Cupertino. David Watanabe has done that for Internet searching and news summaries with two great products: Inquisitor and NewsFire.
Inquisitor is a search completion and suggestion tool for the Safari and Camino browsers, sort of like a built-in Google Suggest. I have been using the Safari version since 1.0.0 and it really does make my life easier. It saves time, which I never have enough of.
NewsFire is by far the best RSS news reader for Mac OS/X. Just buy it. No kidding. Safari's RSS reading cannot compare.
Thursday, August 17, 2006
Rick Brant Electronic Adventures
In the early 1970s, my parents found part of an old juvenile book series in a garage sale and bought them for me. They turned out to be fourteen of the Rick Brant Electronic Adventure series and they set me on fire. I loved science and engineering even as a kid and dreaded the day I would finish the last one. Now my son is eight and my parents gave me the books, which were still in their basement. We started reading them straight away. That led me to Amazon, Powell's, Google and the very handy Abebooks to see if I could buy a few more. Happily, a reasonable sum and a bit of research netted 23 of the 24-book series plus the interesting Science Projects.
The last book in the series, The Magic Talisman, goes for a cool US$1500 these days. Only 500 were printed. That is strange, considering the series sold into the millions of copies. Publishers are certainly fickle. Fortunately, OCLC's WorldCat service shows that four US libraries have the book, so it should be available via inter-library loan.
Hal Goodwin, a US government journalist who involved himself in radically different aspects of science during his varied career, wrote the series under the pen name John Blaine. He was amazingly prolific, writing 43 books, including some children's non-fiction.
I was pleased to find that other fans of the series banded together to get some books reprinted. Spindrift Books, a reference to the island home of Rick and his family, has recently reprinted three hard-to-find books. The main fan site even has detailed errata for the series.
My son is thoroughly enjoying the series and I am pleased that he won't suffer the loss of many of the books as I did. I am enjoying reading them, especially the ones that are new to me. They provide a wonderful opportunity to talk about science, geography, cultures, logic, observation. We are looking forward to making some of the Science Projects, especially after seeing the warning, "Note: These experiments have not been written with the modern reader in mind. Some may be dangerous and should not be undertaken." All right! That reminds me of the time I showed some friends at a dinner party how to make ozone by splitting a lamp cord and shoving it into a sink full of water while it was plugged in. (You can smell it.)
I highly recommend these books to anyone with a boy between the ages of 8 and 12.
Mulgara v1.0.0 Coming Any Day Now
Paul and I are working through the last bits of documentation and legalities needed to release Mulgara v1.0.0. It should happen really soon.
Wednesday, August 16, 2006
New Planets Proposal
The International Astronomical Union (IAU) is considering a proposal that would finally define a planet. The proposal is:
- The object must be in orbit around a star, but must not itself be a star
- It must have enough mass for the body's own gravity to pull it into a nearly spherical shape
Ceres, Pluto (and Charon) and 2003 UB 313 will all qualify under the new definition.
Personally (and the IAU didn't ask me!), I'd rather add a criterion like:
- It must have sufficient gravity to sustain an atmosphere if conditions would support one.
I think that would eliminate the silly little bodies and still offer some way forward. Do we really want a hundred "planets" in our solar system?
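Had this draft passed, the test would have been even simpler than the decision tree I sketched above for the final resolution: two criteria, or three with my addition. A tiny Java sketch; couldSustainAtmosphere is a hypothetical stand-in for a real physical test:

```java
// The draft's two criteria plus my suggested third, as one predicate.
// "couldSustainAtmosphere" is a hypothetical stand-in for a real physical test.
public class ProposalCheck {
    static boolean isPlanet(boolean orbitsStar, boolean isStar,
                            boolean nearlySpherical, boolean couldSustainAtmosphere) {
        return orbitsStar && !isStar && nearlySpherical && couldSustainAtmosphere;
    }

    public static void main(String[] args) {
        // The extra criterion excludes small round bodies that could never
        // hold an atmosphere, however favourable the conditions.
        System.out.println("Ceres: " + isPlanet(true, false, true, false)); // false
        System.out.println("Earth: " + isPlanet(true, false, true, true));  // true
    }
}
```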
Monday, August 14, 2006
Multitouch Computing
If you are the least bit interested in human-computer interface designs, you owe it to yourself to check out Jeff Han's talk at TED 2006. My vote for best quote goes to "There is no reason in this day and age that we should be conforming to a physical device". Thanks to Brian for the pointer.
Application of the Anthropic Principle
Island, over at Science in Crisis, asked me to comment on his post Global Warming too?... what next, politics? after reading my thoughts regarding Critics of Global Warming. I am going to duplicate my thoughts here for archival purposes:
Island was kind enough to invite me to comment on this discussion. I am pleased to do so, since we can all learn if we all talk.
It would seem at first reading that Island believes in the strong anthropic principle, whereas I believe in the weak anthropic principle. The two are very different, indeed. The strong version insists that everything will always work out because we are meant to be here. I think that is a great way of burying one's head in the sand.
I am an evolutionist, for the simple reason that the evolutionary algorithm is testable and, further, I have tested it with computer models. It is a simple and elegant theory which explains much. The evolutionary algorithm has ramifications for this discussion because it can explain our likely future if we continue to "piss in our rice bowl". I once created a small agent model to show what happens to a human society when it overreaches its environment's capacity to support it. Try it. Read the text under the applet and then try running the model with the default settings. Then try varying the settings. Download the source code and look at the algorithms I used. In many, many cases, humans simply starve to death. All of them. The environment eventually recovers. Unless you can find a reason to dispute the underlying presumptions of the model, the result should cause you to agree with David Suzuki: The Earth is not in trouble. We are.
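The applet's source is available at the link above, but the core dynamic is small enough to re-create here. A toy Java version; every parameter (regrowth rate, consumption, growth factor, famine remnant) is my own illustrative assumption rather than a value from the original model:

```java
// Toy overshoot model: a population consumes a logistically regrowing resource.
// All parameters are invented for illustration; vary them and watch how often
// the population overshoots its environment and crashes.
public class OvershootSketch {
    public static void main(String[] args) {
        double resource = 1000.0;        // current stock of the environment
        final double capacity = 1000.0;  // maximum the environment can hold
        final double regrowth = 0.25;    // logistic regrowth rate per year
        final double needPerHead = 1.0;  // consumption each person requires
        double population = 10.0;

        for (int year = 0; year < 60; year++) {
            // Logistic regrowth of the resource toward its capacity.
            resource += regrowth * resource * (1.0 - resource / capacity);

            double demand = population * needPerHead;
            if (resource >= demand) {
                resource -= demand;      // everyone eats...
                population *= 1.08;      // ...and a well-fed population grows
            } else {
                // Famine: only a fraction of those the stock could feed survive,
                // and the land is stripped down to a remnant.
                population *= 0.2 * (resource / demand);
                resource = Math.min(resource, 50.0);
            }
            System.out.printf("year %2d: resource=%7.1f population=%7.1f%n",
                              year, resource, population);
        }
    }
}
```

Run it: with these settings the population overshoots and crashes roughly two-thirds of the way through, and the environment begins recovering while the few survivors start the cycle again.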
The weak anthropic principle, on the other hand, states that, of all the possible ways the universe could have evolved, at least one of them had to allow us to evolve. No hand of God is implicated, in spite of Island's suggestion to the contrary. The weak anthropic principle makes no claim about our future at all. It does not guarantee our safety. It does not suggest that the universe is not a dangerous place. It leaves room for us to become just another evolutionary dead end. It is up to us to stop that from happening. We have a better chance of manipulating the situation for our own survival than any species before us, but we have to work for it.
The economic principle known as the Tragedy of the Commons illustrates our situation best. Humans have become the undisputed top predator in our environment. No other species threatens us now. As we compete with each other, we fall into the Tragedy of the Commons, where the common good is subsumed by short-term greed. That was not critical to the entire species before, although we certainly saw the effect when the Greeks destroyed their farming centers through poor irrigation, Rome devastated Libya's fields and China began its cycle of spasmodic famines. Now the Commons stretches across the globe. We only have two choices: allow the Tragedy of the Commons to play out once again, collapsing our civilization the way it has collapsed others, or agree that the common good really should take precedence this time.
Tuesday, August 08, 2006
More on Faith and Reason
Weeble had an interesting post on Faith and Reason. It is an endless topic for our generation, but I'll wade in again anyway.
Readers interested in the relationship between intelligent design and science may be interested in my post on refuting irreducible complexity, a cornerstone of the current intelligent design concept.
My wife, a thoughtful woman wishing to find a way of thinking which encompasses spirituality and science, recently suggested that I read The Language of God. Unfortunately, upon review I think that author and scientist Francis Collins violated that useful rule of thumb, Occam's razor. William of Occam, himself a Christian friar, said in the 14th Century that "the explanation of any phenomenon should make as few assumptions as possible, eliminating those that make no difference in the observable predictions of the explanatory hypothesis or theory." Collins may also have confused the strong and weak versions of the anthropic principle, but I haven't read enough to confirm that yet. Certainly the strong version is used as an argument for intelligent design and the weak version is a common scientific belief.
Thursday, July 27, 2006
Economic Recursion
Dan has cracked the code on computer upgrades. He decided to buy Apple shares to finance the purchase of his new Apple PowerMac. Apple's success is therefore key to getting Dan's business. Not only is this conceptually pretty cool, it occurred to me that it is not even uncommon. We all do this sort of thing every day. We buy clothes and wear them to the store where we bought them. We buy gas and use it to drive to a service center to keep our car running, where we buy parts and use them to get us to a gas station. I suppose this sort of circular consuming and producing is the basis of any economy bigger than simple subsistence agriculture, where each individual makes most of what they need themselves. Perhaps this is obvious to economists, but I thought Dan's solution was an elegant one (in a bull market). It certainly got me thinking about how all parties in a complex economy depend on each other.
Perhaps the high degree of economic interdependence explains why societies can collapse so rapidly when the social contracts needed to run them are disrupted. I am thinking about Iraq, of course, but it seems this is the basis for a simple economic argument against breaking a country's social system. They are easy to break, but hard to repair.
Monday, July 17, 2006
Mulgara's Initial Release
Mulgara is going ahead. The initial release looks set for the end of this month (July 2006). This first release will be based on an older version of Kowari from 1 August 2005. That date will avoid any code contributed to or by Northrop Grumman Corporation, even though their recent correspondence allows its use. It will also avoid a reported scalability bug while we investigate.
Unfortunately, several old Kowari bugs will temporarily reappear in Mulgara, but users may expect them to be quickly repaired. Those bugs include the lack of permanent model names and the presence of the Jena API and RDQL support.
Perhaps Mulgara can one day recombine with Kowari. The Mulgara team remains open to that possibility, but needs to move forward without waiting for further talks. Users are encouraged to follow Mulgara for new features in the meantime.
Thursday, July 06, 2006
Northrop Grumman Backs Down on Kowari
Northrop Grumman Corporation has sent an email affirming that the code currently checked into Kowari's Sourceforge project is "non-proprietary, open source software, licensed freely to all under the Mozilla Public License, version 1.1". They have dropped their claim that some of the code is tainted.
The Northrop lawyer specifically referred to their letter to me last January and apologized for its results.
Mulgara developers are discussing what impact, if any, this news will have on the fork.
Ant Pedometers
Agent modeling is looking like an ever more powerful mechanism for modeling the real world. Ants, it seems, count their steps in a form of simple dead reckoning. Thanks to Nova Spivack for the pointer.
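For the curious, step-counting dead reckoning takes only a few lines. A minimal Java sketch under assumed numbers (the step length and the outbound legs are invented): accumulate each leg's displacement, and the home vector is simply the negated sum:

```java
// Dead reckoning by step counting: accumulate (heading, steps) legs,
// then the home vector is just the negated sum. All numbers are invented.
public class AntPedometer {
    public static void main(String[] args) {
        double x = 0, y = 0;
        final double stepLength = 0.007; // metres per stride, assumed

        // An outbound foraging path as (heading in degrees, step count) legs.
        double[][] legs = { {0, 500}, {90, 300}, {45, 200} };
        for (double[] leg : legs) {
            double rad = Math.toRadians(leg[0]);
            x += Math.cos(rad) * leg[1] * stepLength;
            y += Math.sin(rad) * leg[1] * stepLength;
        }

        // Home is the negation of the accumulated displacement.
        double distance = Math.hypot(x, y);
        double homeHeading = Math.toDegrees(Math.atan2(-y, -x));
        System.out.printf("Walk %.2f m on heading %.1f deg to reach the nest,"
                + " i.e. about %.0f steps.%n",
                distance, homeHeading, distance / stepLength);
    }
}
```

As I understand the reported experiments, ants on artificially lengthened legs overshot the nest on the way home, which is exactly what happens here if you inflate stepLength after the outbound trip.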
Brains for Phenotypic Advantage Over Environment?
I had a thought in January 2004 while re-reading Daniel Dennett's Darwin's Dangerous Idea and thinking about Jared Diamond's postulated "Great Leap Forward" in human capacity 50K years ago.
Consider that humans "suddenly" exploded all over the face of the earth 50K years ago. This is a big deal because all other species are constrained to a bounded habitat. This event was almost certainly the result of an increase in human brain capability.
Dennett says we are allowed to take the "intentional stance" when trying to understand evolutionary phenomena. That is, we can ask what something is "good for". Indeed, he says that the intentional stance is necessary for any real reverse engineering effort.
So, taking the intentional stance, we can say that the mutation that spread through human brains 50K years ago was "good for" spreading human phenotypes into new habitats.
Now, genes don't care about environment. Environments impact genes' ability to survive. Phenotypes care a *lot* about the environment, but they are stuck with the hand they are dealt.
I think it makes sense to think about the mutation that caused the "Great Leap Forward" as a mutation for allowing (fixed) phenotypes to extend their habitat (by thinking, having better mental maps of cause/effect relationships, whatever). Phenotypes which could have the option of moving to better environments could presumably breed more, embedding the mutation in the gene pool.
Humans can survive in difficult environments (e.g. Siberia) by thinking and by passing down "good" thoughts ("memes") via cultural transmission. It seems to make sense that our distant ancestors did not have that capability, and so were limited to more forgiving environments for their phenotypes (e.g. equatorial Africa).
This difference in phenotypic capability is certainly modelable, but I am not sure whether the model could be tested against any hard evidence. At best, one could show that the modeled capability is one possible mechanism that would allow humans to survive in Siberia (a mutation that gave you blubber being another!). That's OK, since there are many possible paths through Darwinian design space and I think the best anyone can show would be a valid path, which might differ from the single path actually trodden by humans without further evidence from genetics/archaeology/etc.
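As a trivial starting point for such a model, here is a Java sketch of the comparison. Everything in it, the patch-failure odds, the survival probabilities, the growth rate, is an assumption of mine chosen only to make the mechanism visible:

```java
// Two phenotypes in a patchy world: "mappers" can move to a better patch
// when theirs fails, "stayers" cannot. All probabilities are invented.
import java.util.Random;

public class HabitatSketch {
    public static void main(String[] args) {
        Random rng = new Random(7);
        int mappers = 100, stayers = 100;

        for (int gen = 0; gen < 40; gen++) {
            boolean patchFails = rng.nextDouble() < 0.3;            // harsh year at home
            double stayerSurvival = patchFails ? 0.5 : 0.9;
            double mapperSurvival = patchFails ? 0.85 : 0.9;        // mappers relocate and mostly survive

            mappers = breed(survivors(mappers, mapperSurvival, rng));
            stayers = breed(survivors(stayers, stayerSurvival, rng));
            System.out.printf("gen %2d: mappers=%d stayers=%d%n", gen, mappers, stayers);
        }
    }

    static int survivors(int n, double p, Random rng) {
        int alive = 0;
        for (int i = 0; i < n; i++) if (rng.nextDouble() < p) alive++;
        return alive;
    }

    static int breed(int n) {
        return Math.min(10000, (int) (n * 1.2)); // modest growth, capped
    }
}
```

On most seeds the "mappers" outgrow the "stayers" within a few dozen generations. That is the shape of the argument, of course, not evidence for it.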
Memetic Nature of Sexual Selection?
What if the phenomena that we associate with Darwin's sexual selection (e.g. large breast size in humans, hair/eye color and other properties which cannot be traced to advantage reinforced by natural selection) are a consequence of memetic imitation? In other words, the underlying mechanism of sexual selection is memetic imitation in the same way that the underlying mechanism of natural selection is genetic survival.
That would be consistent with Schelling's neighborhood, the observation that people tend to prefer mates of a physical form similar to those encountered in childhood and possibly Jared Diamond's comments in The Third Chimpanzee regarding sexual selection.
Critics of Global Warming
K and my father both pointed me to Prof. Richard S. Lindzen's article on the Wall Street Journal's OpinionJournal.
Prof. Lindzen wrote a justified diatribe on both Al Gore's approach in his movie and various hyperbolic media statements regarding the ramifications of global warming. He specifically said that there is no "consensus" within the scientific community for global warming.
There is no "consensus" for the simple reason that there is no "community" and there is no mechanism for "consensus" other than polls by journalists. Daniel Dennett, the Tufts University philosopher famous for his missives on evolution, once said that any new idea in science goes through three stages:
1. "That can't be right!"
2. "Well", in the face of overwhelming evidence, "you might be on to something..."
3. "Of course! Everybody knows that."
Those stages constitute "consensus" in the scientific community. Note that there could very well be a fourth stage when the notion enters school textbooks. I think that global warming is somewhere between stages (2) and (3).
Prof. Lindzen has most probably fueled the ill-considered media storm that he dislikes. His article took aim at the media and politicians coming to the wrong conclusions, but he failed to highlight where consensus does occur. He did say:
- "Most of the climate community has agreed since 1988 that global mean temperatures have increased on the order of one degree Fahrenheit over the past century, having risen significantly from about 1919 to 1940, decreased between 1940 and the early '70s, increased again until the '90s, and remaining essentially flat since 1998."
- "There is also little disagreement that levels of carbon dioxide in the atmosphere have risen from about 280 parts per million by volume in the 19th century to about 387 ppmv today."
- "Finally, there has been no question whatever that carbon dioxide is an infrared absorber (i.e., a greenhouse gas--albeit a minor one), and its increase should theoretically contribute to warming."
He made the point that the Earth's climate is dynamic, and it surely is. Understanding the Earth's climate is probably harder than rocket science, and for the same fundamental reasons. Rocket combustion is complex (in the scientific sense of the word) and so is the climate. There are too many variables interacting in too many ways to measure them all.
So, what is going to happen? We don't know. Al Gore said it, Prof. Lindzen said it and I've just said it. However, consider this. We do know that we have been polluting the air, water and land all around us since the industrial revolution and are continuing to do so at an increasing rate. We do know that badly polluted areas are difficult to live in (such as Mexico City) or even impossible (such as Prypiat, Ukraine). Should we continue to pollute at such a rate until the science is 100% accurate in its ability to predict the future, or should we reduce our pollution rates?