So I’ve finished my first week of summer research, which I began on Tuesday (as Easter Monday was a holiday). I am revisiting the spreading and covering numbers, those devilish little fiends from combinatorial commutative algebra that plagued me last summer. You can read about last summer’s research here. I shall try to blog often about this summer’s efforts as well.
This week was very much about settling in and trying to get my mind into a math research mode. I am starting earlier than I did last year, because I will be returning to school in late August. We professional year students have to start early to finish classes in time for student-teaching in November! Unfortunately, this means that I didn’t get much of a break between the end of classes and starting research. I have tried to seize as much downtime as I could. Still, as far as summer jobs go, I won’t complain about doing research. It’s pretty choice.
I’m in the same office I used last year, and you can see photographs in this blog post. I’m at the desk where Aaron is sitting in those photos, as my former desk is still occupied by a sessional professor’s things. Aaron, who is not in concurrent education, has also finished his math degree and is going on to graduate school here at Lakehead. He promises he will be getting a key to the office, however. Also, Rachael is returning to academic research starting next week, so it will soon feel like old times!
There is a new face in the office this year: Tim also has an NSERC grant to do some math research; he’s a third year student who has switched to math from biology, and so far his project involves recurrence relations.
There is one material difference in the office this year: books. The math resource room is getting new carpet, so Tim and I spent some time this week removing all of the books from the metal bookcases, packing them in what boxes were available, and stacking those boxes in our office. We cannot reach the chalkboard any more; moreover, we ran out of boxes, so there are a lot of freestanding stacks right now. It’s messy and musty and dusty and kind of cool, though I want my chalkboard back.
I didn’t take any photos; I shall do that on Monday and update this post.
I shall not say too much about how my research is going right now, because there’s not much to say. I’ll be using Macaulay2 and SHARCNET again. So far we’re looking at the orbits of vertices of my graphs; these are the equivalence classes created by the action of the symmetric group on the graph’s vertex set. We hope to find some relationship between the orbits and the size of the maximum independent set and thereby find a way to compute the spreading number. Next week I’ll be investigating this further.
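As a rough illustration of what an orbit computation looks like, here is a minimal Python sketch (not the Macaulay2 code we actually use). It assumes, purely for illustration, that vertices are represented as exponent tuples and that the symmetric group acts by permuting coordinates; under that action, two vertices lie in the same orbit exactly when one is a rearrangement of the other, so the sorted tuple serves as a canonical orbit representative.

```python
def vertex_orbits(vertices):
    """Partition vertices (tuples of equal length) into orbits under
    the symmetric group permuting coordinates.  Sorting each tuple
    gives a canonical representative, so vertices with the same
    sorted form belong to the same orbit."""
    orbits = {}
    for v in vertices:
        orbits.setdefault(tuple(sorted(v)), []).append(v)
    return list(orbits.values())

# Example: the three unit vectors form one orbit; the two vectors
# with a pair of ones form another.
example = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1)]
orbs = vertex_orbits(example)
```

Grouping by canonical form like this avoids enumerating all n! permutations, which matters once the vertex sets get large enough to hand off to SHARCNET.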
Last updated Tuesday, May 3, 2011 at 8:22 PM
I have to say, I’m experiencing some strong technology lust for the new wave of Android 3.0 tablets, beginning with the Motorola XOOM, that are hitting the market. Future Shop’s tech blog has posted some video reviews by rgbfilter that show off the XOOM, and there’s a part of me that’s saying, “Want. Want. Want.” It’s exciting to see competitors for the iPad running the first version of Android that’s “optimized for tablets,” and along with the release of the BlackBerry PlayBook, the tablet market is starting to get very interesting.
I have been somewhat sceptical of the niche tablets fill since the release of the original iPad. In retrospect, I think that was as much a reaction against the hype surrounding the iPad itself as any qualified evaluation of tablets in general. The idea that the iPad is a “game-changer” (whatever that means) was silly to me; yes, it’s a significant new product, but tablets are still in their infancy. They haven’t even started teething yet.
I’ve had my Samsung Galaxy S for about six months now, and I love it. This experience with an Android smartphone, and some good observations regarding the utility of tablets, such as this post by Peter Nowak, have caused me to change my mind. That is, I’m a little more excited by (and about) tablets now than I was last year, and I kind of want one.
But not yet.
My philosophical difficulties with Apple preclude me from ever owning an iPad. Still, I’ll admit to lusting after the physical device itself—I’ve never had a problem with Apple’s design, and I think the iPad is a beautiful device. So I’ve been watching with interest the emergence of competitors, and of course my own biases make me partial to the Android crowd. Nevertheless, I still can’t justify buying a XOOM or similar device, because I’m just not willing to pay $600 for tablets as they are now.
I know some people are, obviously, and more power to them. I guess I just have to face that I am not an early adopter (except, apparently, when it comes to HTML5!). Perhaps if I had a legitimate need for a tablet, rather than the mere desire for one, I would be more amenable to the price tag. When it comes to that sort of money, however, I force myself to be honest: I don’t need a tablet right now. Between them, my venerable 4-year-old laptop and my shiny smartphone serve my needs. Sure, I can think of plenty of situations where a tablet would be ideal—lately I’ve been bringing my phone into the living room when my dad and I watch TV, so I can sign into my IM client through it. I can see myself doing much more involved work on a tablet in that living room—coding, or writing blog articles—that just isn’t practical on my smartphone’s small screen and isn’t comfortable on a laptop in the living-room chair. Likewise, a tablet is a great portable compromise in those cases where I don’t really need to bring my laptop to school but want more than just my phone.
(This last attitude, if anything, demonstrates the effect tablets are beginning to have. I suppose it’s part of the “game changer” paradigm shift iPad enthusiasts want to see. Laptops used to be the pinnacle of portability; now they are big and clunky. Tablets are sleek and shiny and sexy. How the times change.)
So a tablet would be wonderful, but I don’t need it; I just want it. And for me, $600 is too much to spend satisfying a want. Even $450 (the no-contract price AT&T is offering for the comparable Acer Iconia Tab A500) is still rather steep. If I had the extra money, perhaps I’d buy one anyway, but I would still hesitate and think long and hard. Tablets are just very young.
It’s similar to my reaction to trying the first-generation Kobo eReader last year. I love the idea of an eReader, but the technology isn’t mature enough for me yet. Likewise, I love what I’ve seen of tablets so far, but I can envision them getting much better in a relatively short period of time. I imagine it’s similar to how laptops began to proliferate throughout the 1990s; I still see people using really old ThinkPads, and all I can think is, “I admire you for using last year’s model … but wow, that’s an old device.” Of course, if I took this argument to its extreme, I’d never buy any technology, because “next year’s model” is always around the corner and always better in some way.
But it all just comes back to a question of needs, wants, and opportunities. Why should I buy this year’s tablet when I don’t need it, especially if I decide next year I need a tablet and there are much better models available by then? This is not intended to be an anti-tablet polemic. I’m sure other people have plenty of legitimate reasons to buy existing tablets at their current prices, and I don’t begrudge them that. I’m just expressing my own mixed feelings about my lust for a technology that’s still very young and still improving in leaps and bounds. I want a tablet, but I also want a better tablet than what we have now. And since I don’t need one right now, I’m willing to wait a little longer.
For perhaps the first and last time ever, “Oxford English Dictionary” was trending on Twitter last Friday. Why? Well, aside from an overdue recognition of this authority’s awesomeness, the OED was trending because its latest update adds entries for online initialisms such as OMG, LOL, and FYI. As if that were not enough to send language purists into apoplexy, the OED now also recognizes “heart” as a verb meaning “to love; to be fond of,” in the sense of “I heart pyjamas.” That’s right: Internet diction has taken over our most beloved of English language institutions. We must draw the line in the sand and say, “Enough! This far and no farther!”
Or not. Rather than looking at this as a compromise of the OED’s purity, we could take it as evidence of how our usage of the Internet has shaped language. I admit to uttering “OMG” aloud and telling people I “heart” things, and while I tend not to say “LOL,” because I’m not sure how to pronounce it in a way that doesn’t sound stupid, I do love me some “for the win” (FTW, for those of you playing initialism bingo at home).
As the school year draws to a close, my Philosophy & the Internet course has started looking at the Internet in terms of posthumanism. For my fourth and final critical response, I’m looking at the first seven pages of the Prologue to My Mother Was a Computer: Digital Subjects and Literary Texts, by N. Katherine Hayles. The excerpt is accessible through Google Books. We first read an earlier piece by Hayles, in which she discusses the tension between the Enlightenment-influenced attitudes of liberal humanism and posthumanism. Now she has shifted toward an examination of various posthumanist visions, with a particular emphasis on embodiment.
I’ve always been fascinated by the posthuman as depicted in science fiction. If our species survives the coming decades, I think becoming posthuman is the natural consequence of the increasing complexity of our technology. For the most part, I welcome our posthumanist future, but I have always found the idea of “mind uploading” a little disconcerting. I can’t get past the fact that uploading my mind to a computer would result in a copy of the original rather than the real “me.” Not much in the way of a rational explanation seems forthcoming for this discomfort. I don’t believe I have a soul, so it’s not as if there is something unique that I miss during the duplication process. I think it just all comes down to what Hayles is talking about with the concept of embodiment.
Whereas in her previous book, How We Became Posthuman, Hayles argued against the disembodied version of posthumanism, she now notes that:
As new and more sophisticated versions of the posthuman have evolved, this stark contrast between embodiment and disembodiment has fractured into more complex and varied formations. As a result, a binary view that juxtaposes disembodied information with an embodied human lifeworld is no longer sufficient to account for these complexities. (3)
Nevertheless, she goes on to say that she has not abandoned embodiment as an important attribute of her posthumanist philosophy. Rather, she is suggesting that the increase in complexity requires a more nuanced understanding of the role of embodiment in posthumanism. This seems sensible enough to me. After all, the technologies that are making us posthuman are as much attempts to increase the abilities and soundness of our bodies as they are a way of expanding our mentalities. I wear glasses, a prosthesis that improves my physical condition. How much longer will it be before it’s possible to use nanotechnology not just to correct one’s eyesight but to actually enhance it beyond the human norm? Returning to reality for just a moment, we already have the ability to create prostheses that exceed the ability of human limbs in certain tasks. So we are becoming “more than human,” but in a very embodied way.
Hayles thus wants to examine how our relationship with increasingly pervasive technology, which is both external and now internal to our bodies, influences our understanding of reality. She brings up the example of the Computational Universe model, referring to Stephen Wolfram’s book A New Kind of Science. I find the idea that we are living inside a universe generated by computational processes fascinating, and also a little mind-blowing. (As always, xkcd has a relevant comic.) I also admit it is kind of attractive—though I’m not sure how, from a physics perspective, it might be testable. However, Hayles is neither endorsing nor refuting the Computational Universe model. Instead, she wants to prod it with a pointy stick:
I offer my own commentary on the Computational Universe, including a critical interrogation of current research claims. My primary interest, however, is not in separating the Computational Universe as the means by which reality is generated from its function as a metaphor for understanding natural and cultural processes. Rather, I am interested in the complex dynamics through which the Computational Universe works simultaneously as means and metaphor…. (4)
Ah-hah! This provides the segue into what Hayles calls “a fundamental question. What resources do we have to understand the world around us?” (5). She lists three broad categories: mathematical equations, simulations, and “discursive explanations.”
As a mathematician, I feel obligated to comment upon her critique of mathematics as a method of understanding the world. She has “little to say” except to point to others, including Wolfram, who claim that mathematics is of “limited usefulness … in describing complex behaviors” because these “typically cannot be described by equations having explicit solutions” (5). This is correct in the sense that, even if we develop the mathematics to model such behaviours, the lack of an explicit solution means we have moved from precise mathematical statements to approximations and models—i.e., we are now in the realm of the second category, simulations. Moreover, while mathematics is very good at describing underlying, fundamental systems, it’s also very arcane. That is, it produces accurate descriptions, but there is a significant investment required to understand those descriptions. Mathematics, as a tool for understanding complex systems, is limited by its own complexity.
Thus, what I think the other two categories share is a reductive capability. Both simulations and discursive explanations allow us to simplify, and we can measure the quality of these explanations by how accurate they remain, in terms of corresponding to observations, despite their simplifications. It’s easy to observe this in science classrooms the world over. We still teach kids about Newton’s laws of motion, even though Newton’s laws are wrong in the sense that they have been superseded by and are conceptually incompatible with Einstein’s relativity: Newtonian motion assumes absolute space and time, which Einstein’s theories reject. Yet it turns out that for very basic, simple purposes, Newton’s laws are so close an approximation that it doesn’t much matter. Hence, we still teach Newton’s laws because they have made the transition from mathematical equations that describe reality to discursive explanations of reality.
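To put a number on just how close that approximation is, here is a quick, purely illustrative Python computation of the relativistic correction factor (the Lorentz gamma) at an everyday speed:

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s


def lorentz_gamma(v):
    """Relativistic correction factor; Newtonian mechanics
    corresponds to the approximation gamma = 1."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)


# A car on the highway moves at roughly 30 m/s.
gamma = lorentz_gamma(30.0)
# gamma - 1 is on the order of 5e-15, so Newtonian and relativistic
# predictions agree to about fourteen decimal places at this speed.
```

At highway speeds the correction is far below anything a classroom experiment could measure, which is exactly why the simplified explanation survives.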
Hayles also highlights similarities and differences between these latter two categories. She cites Friedrich Kittler’s interpretation of reading as a type of hallucinatory experience:
Kittler’s proposition that reading novels is like a hallucination highlights one of literature’s main fascinations: its ability to create vividly imagined worlds in which readers can “hallucinate” scenes, actions, and characters so fully that they seem to leap off the page and inhabit the same psychic space as the readers themselves. In this respect, literature functions more like simulations than do other discursive forms, because like computer simulations … literary texts create imaginary worlds populated by creatures that we can (mis)take for beings like ourselves. (6)
This reminds me of a Neil Gaiman quotation I love:
Books make great gifts because they have whole worlds inside of them. And it’s much cheaper to buy somebody a book than it is to buy them the whole world!
And this interpretation of literature as encompassing imaginary worlds is also reminiscent of how, in some science-fiction literature, alien beings (including artificial intelligences) often have difficulty grasping the “human” concept of fiction. They do not understand that humans have developed an entire mode of discourse predicated on untruth that nevertheless refers to and can facilitate an understanding of truth. That’s us: good old, paradoxical humanity!
But Ben, you’re asking, what does all this mean in relation to the Internet? I think the Internet as an artifact might be something we can point to when we say we are already posthuman. Also, I don’t think it’s unreasonable to expect that, in the not-so-distant future, we will be able to connect our minds directly to the Internet. This would not lead to mind uploading per se, but rather an expanded mentality in which we remain embodied but aware of more than just what we perceive through our bodies.
Most importantly, however, I think the Internet is going to influence our norms of interaction, the way we perform our identities, and the way we view the identities of others. That is why I mentioned the OED’s inclusion of OMG, LOL, and “to heart” at the beginning of this post—the Internet is changing us, though we might not always be able to understand in what ways. Just as the mainstream adoption of the telephone changed how we interact, so too does today’s shift from phone calls to email, texting, and Facebook messaging: how we interact with others changes as our modes of interaction change. We even break up differently because of social media.
Don’t get too excited though. Recall that the majority of the world isn’t so tech-savvy or Internet-enabled. It is easy for us to get excited about the Internet and start examining how using it is going to change our definition of being human. What about all those humans who aren’t “network citizens?” If the Internet does become a keystone of our transition to the posthuman, does this mean that people without access to the Internet will be excluded? Will we see a “posthuman digital divide,” where part of our species becomes increasingly embedded within these technological systems and the other part remains isolated? Or is it happening already?
I had a difficult time coming up with a further resource. So I hit up TED and looked for a relevant talk, and I found “Kevin Kelly on how technology evolves”. It seems applicable to the ideas we’ve been discussing for several weeks now: what technology “wants” and how it develops and self-organizes its complexity. One line toward the end really strikes me:
Our humanity is actually defined by technology. All the things that we think that we really like about humanity is being driven by technology. This is the infinite game. That’s what we’re talking about. You see, technology is a way to evolve the evolution. It’s a way to explore possibilities and opportunities and create more. And it’s actually a way of playing the game, of playing all the games. That’s what technology wants.
If philosophers like Hayles are correct that we are on the verge of, if not already in, a posthuman future, then this question of “what technology wants” will be paramount.
I mentioned at the beginning of this post that I’m fascinated by depictions of posthumanism in science fiction. If you have the time, I highly recommend you pull down some books from authors who hit on posthuman motifs: Charles Stross, Vernor Vinge, Alastair Reynolds, Iain M. Banks, Peter F. Hamilton, Neal Stephenson, and of course, the inestimable Nancy Kress. What these authors do is important not because their futures are likely or even possible, but because, in imagining what it will be like to be posthuman, they set the stage for discussions of what we are becoming.