Home page: http://tom-servo.net
Posts by saphetiger
I installed the Windows 8 Consumer Preview yesterday. My first impression is that it is clearly not meant for desktop/laptop computing. If this is an attempt to unify the desktop and tablet operating systems, and if this preview is anything to go by, it will be a resounding failure. This is not to say that I find it without merit; I just don't think that many desktop users will find it palatable. The basic UI elements seem to be at a disadvantage if you don't have a touch screen.
First, let's look at the start menu. I won't argue that the old start menu interface couldn't stand to be improved in some respects, but I will argue that the changes in Windows 8 aren't what was called for.
First, there is no Start button. Instead, a mouse gesture to the lower-left corner brings up the start screen. On this screen you will see groups of large, clunky banner icons. Now let me step back for a second. On my Windows 7 machine, I probably have 70 software titles installed; how much room do you suppose that will take on a 1920x1080 screen? Let's compare this to the start menu, which takes up perhaps 30% of your desktop real estate. "Not all users have that many applications installed," you say. I digress.
So on this start screen you'll have access to all of the installed applications. Some of them will open full screen and some will open in the desktop view. If it is one of the apps from the "store", it will open full screen, in a window you can't control, and it can only be switched to and from through the Metro launcher. But at least we're free of that pesky taskbar, right?
WRONG! If you launch a traditional desktop application, it opens in the desktop view, and you can only get back to it through the traditional taskbar there. Note: unlike the other apps, it will NOT be available through the Metro launcher. "So, you're saying that if I switch applications, there are two places it might be?" Yes, I am. In the days of high-resolution monitors, do applications really need to be full screen or not at all? Apparently the answer is yes.
Let's talk about switching applications for a moment. In the days of old, ever since Windows 95, switching applications was a straightforward process: click the button you want, and it comes to the front. As long as you're in the desktop view, this still holds true. However, if you use one of the newfangled tablet-style apps, it doesn't remotely hold true. It takes three mouse movements to get back to what you want to do: go to the lower-left corner, wait for the start screen to show up, then click on the app you were using. If it was in the desktop view, you then have to click again to get back to it. This isn't something many people would think much about, but I guess that's my point. You shouldn't have to.
Let's talk about closing applications. In order to close an application, you have to bring up the Metro launcher, right-click, and choose close. I'm not even sure how one would do this on a tablet. They could have handled this better in a lot of different ways: they could have added a gesture that brings up the close option, or they could have just had the desktop view handle everything and put in the affordances to make that view accessible to tablets. Once again, I digress.
Getting to the Control Panel is tricky, and there is a stunning lack of customization possible with the new interface. You can change the background to one of a list of pre-provided patterns and use one of the preset colors. You can't change the location, size, or orientation of the Metro launcher. I have not found any customizations that streamline the interface, and there are no hotkeys you can bind to make the unruly interface behave the way desktop users have come to expect.
I'll continue playing with it. To be fair, it's at a very beta stage, and Microsoft may fix some of this as development continues. So far, though, this is yet another case of Microsoft ignoring how people interact with their software. As things stand, it seems they have glued two operating systems together into a jumbled-up mess.
I’ll add some screenshots here shortly to demonstrate my point.
So, I recently started working in a Microsoft-laden environment. This place is crawling with Microsoft technologies, from Active Directory to SharePoint. I've noticed a seemingly trivial but, I feel, important trend in Microsoft's software designs: they seem extremely hesitant to embrace tabs in their interfaces.
Now I know it sounds trivial, but consider for a moment that companies like AOL, Netscape, and Mozilla were famous for what you could do with tabs. Even Microsoft's competitors have them: IBM with its Sametime messaging system, and Lotus Notes, its mail/database system. Netscape stole market share from Microsoft with its decision to introduce tabs into its browser (Netscape, that'll take you back). AOL introduced tabs in its instant messaging client years ago. Why doesn't Communicator have that same functionality? Thunderbird and Evolution both have tabbed reading panes. The ability in Chrome to merge tabs into a window or pull them out again is, frankly, essential. My point here is that this is not a new idea. Microsoft has had time to implement this feature.
Now, I think it's indisputable that Microsoft markets these tools to professionals: people who don't have time to work with one thing up at a time. These people might realistically have fifteen or sixteen windows open (typically on relatively small, low-resolution monitors). My observation has been that the applications most responsible for this rampant misuse of desktop real estate are typically applications from the Office suite. A user may have five emails open. There might be five people contacting him via Communicator. Now they're talking about two Word attachments that got sent to the wrong place. In this scenario there are twelve windows open when there needed to be only three.
My one request to Microsoft is this: borrow a page from a ten-year-old playbook and consider implementing this ancient but essential feature.
In recent weeks, everyone has become aware of the massive attack that Sony suffered. The exploitation of a commercial network like that is very noteworthy. Many people are angry that they can't play their games online; some are upset that their usernames, passwords, and other identifiable information have been exposed. Obviously something like this can hardly be considered a good thing; however, I think it brings to light some issues that will become increasingly relevant in years to come. I honestly believe there are some valuable lessons that can be taken away from this.
Sony, I'm sure, has learned the value of pen-testing. While the nature of the exploit hasn't been clearly revealed, most accounts point to a simple one, and careful pen-testing might have caught it. Hopefully, Sony has also learned a valuable lesson about the responsibility a company has to protect its customers' information.
From most accounts, Sony will have to spend billions to rectify this situation. While I would like to be quick to point the finger at Sony, I'm not entirely convinced that's warranted. It is perhaps fortunate that this happened to a company that can likely withstand the backlash it will create. I sincerely hope this has opened the eyes of company executives everywhere, because it could have been any company. I'm not convinced that Sony's infrastructure was inherently less secure than any other retail operation on the internet; while this is certainly a large security breach, I'm relatively surprised there haven't been more. The scale of this breach serves to make it more visible, which I think will lead people to take these issues more seriously. If it had happened to a smaller firm that could more scarcely afford the monumental cost involved, it may not have been more than a footnote in the back of the newspaper and a 30-second spot on some local news channel.
The PlayStation Network is a relatively closed system. Most of the network is only accessible from a few types of devices. While I probably should do some FAQ checking on this, credentials are presumably transmitted over SSL. There seems to be some question whether the credentials are hashed properly prior to transmission; however, there is a clear effort toward security. Malware and viruses are difficult to develop for game consoles, and are exceedingly rare if they exist at all. A keylogger would be next to useless on a PS3, given that most people don't connect a keyboard and mouse, so it would take a long time to map key presses to meaningful information. So Sony had a right to believe that the client end was relatively secure, and on that point they were right.
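For context on what "hashed properly" means on the server side: a service should store only a salted, slow hash of each password, never the plaintext. A minimal sketch in Python follows; the function names, iteration count, and salt size are my own illustrative choices, not anything Sony actually used.

```python
import hashlib
import hmac
import os

# Illustrative work factor; real deployments tune this to their hardware.
ITERATIONS = 100_000

def hash_password(password, salt=None):
    """Return (salt, digest); store both, never the plaintext password."""
    salt = salt or os.urandom(16)  # a fresh random salt per account
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

With a scheme like this, even a full database dump exposes only salts and digests, which an attacker must grind through one guess at a time.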
Now we can debate the effectiveness of those measures; however, my purpose is not to argue that their measures were sufficient, as they clearly weren't, but to compare them with a traditional web vendor. There are a lot of web applications with weak authentication mechanisms, vulnerabilities to SQL injection, or all manner of other nasty vandalism. In Sony's case, it appears that the failure was server-side. Most Windows-based machines are prone to all of the threats that game consoles are specifically resistant to, but I don't think most PC-based vendors are much better secured. E-commerce, contrary to the predictions of many analysts in the '90s, is not a fad, and it is not going anywhere. I would bet there are a large number of web-based vendors who have not put sufficient thought into their security strategies. It would greatly benefit them to be proactive in their efforts rather than reacting to a breach. What does client security have to do with it, you might ask? Simply put, if one account on the site is compromised (ahem... like a developer account, an admin account, or just an account with a credit card number), it's often possible to use that account to get others. Examining their service's weaknesses, not only for the benefit of their clients but to hedge legal liability, only makes sense. Please don't misunderstand me: obviously there needs to be a balance between security and functionality, but this does not mean that security should be cast to the wind.
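To make the SQL injection point concrete, here is a toy sketch using Python's built-in sqlite3 module. The table, account name, and attack string are entirely made up; the point is only the contrast between splicing user input into the SQL text and passing it as a parameter.

```python
import sqlite3

# A toy accounts table; the schema and data are illustrative only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'x')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input becomes part of the SQL itself.
    return db.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats `name` strictly as data, never as SQL.
    return db.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # the injected OR clause matches every row
print(find_user_safe(payload))    # no user is literally named that, so: []
```

The unsafe version turns a login lookup into "return everything"; the safe version costs nothing extra to write. This is the kind of cheap, well-known defense that proactive review would catch.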
OK, so I've talked about web-based vendors and their role. Are they solely responsible? I really don't think so. I think this should be taken as a cautionary tale to users, shoppers, and web-service subscribers alike, and a call to consciousness about what we authorize companies to keep on file for us. Maybe it's worth the extra minute and a half to enter your credit card again. One-click purchasing is certainly convenient; however, consumers need to weigh the value of that convenience against what it would cost them if their information were compromised. Once again, a balance needs to be struck between security and functionality, but I have to wonder how cognizant people are of the information they put out there.
I just got back from Notacon. It was a lot of fun. I was surprised by which talks were interesting to me and which ones weren't as interesting as I thought they'd be. As usual, there was much celebration and drinking, and I walked away feeling as if I'd learned something. Unfortunately for Notacon, Thotcon ran at the same time, which left Notacon missing some of the key personalities that some of us had grown accustomed to seeing. It was still a great time.
One of the most memorable presentations I saw at Notacon was about a project called PK4A, conducted and completed by a hackerspace in Toronto called Site3. They used the opportunity to talk about installation art of the flammable variety. This is a fascinating concept: basically, you can use propane tanks with gas fittings to create fantastic flame art. They gave an in-depth overview of the design of their Burning Man installation, which consisted of a central heart surrounded by veins and arteries that would shoot flames in response to human touch. They brought their PK4A project along to showcase the principles of that installation at a much smaller scale. Basically, it was a remote-controlled torch. The torch itself was constructed from a propane tank and some gas pipe and fittings. The activation mechanism was an Arduino-controlled solenoid: the solenoid released gas from the valve at the top of the installation, which was then ignited by the pilot light. While we didn't get to see it in flaming action, it was still a cool concept, and the solenoid switch was thoroughly demonstrated.
Prior to now, I was not even aware that fire installation art existed. I guess that goes to show that I need to spend more time on the internets. I think I might experiment with this some (probably to the chagrin of my neighbors).
I can’t wait for Notacon 9!!!!!!!!!
I am finally hopping on the Arduino bandwagon: I purchased my first Arduino. I went with the Duemilanove because it is standard and cheap. Almost all of the available shields are designed to fit on top of it, and most of the documentation that I've read for Arduino projects assumes this is the model you have. While I am not arguing it is the best model, it is definitely worth the $15 investment I have in it.
I have been working with a Boarduino, with the same ATmega328 processor on it, for a couple of weeks now. It has a couple of cumbersome points that make it a lot less convenient than its Duemilanove counterpart. For one, it does not have auto-reset. This means that every time I want to load a new program onto it, I must time the reset so that it is ready to accept the new instructions once it has come back up. The Duemilanove doesn't have this disadvantage; it is smart enough to accept the new sketch (Arduino program) and then automatically reset. As far as I can tell, they both have the same pin-out. The Duemilanove just has it built in, while the Boarduino has to sit atop a breadboard to have the same functionality. I can potentially see this as an advantage, as you could put the Boarduino directly on top of another prototype board and have it run the show. The other drawback the Boarduino has compared to its counterparts is the expensive TTL cable required to interface with your PC. Most of the other Arduino variants I've worked with thus far use a standard USB cable to upload new sketches.
Overall, I'm very excited about all of the project opportunities this new investment will present for me. I am also excited to see the size of the development community that surrounds the Arduino project and all of its derivatives. I foresee myself, and my fledgling hackerspace (Cow-Town Capacitor), developing with this platform for a long time to come.
I love the idea behind the prototyping system: build cool stuff, and don't worry about the integrated circuits until production time. By then you will already pretty much have the firmware ready for your project, and all that's left is to build it.
I recently went to a convention that had representatives from hackerspaces all over the country. This was very exciting to me as I was only vaguely aware of what a hackerspace was. What an exciting idea!
First, let's take a look at what a hackerspace is. A hackerspace is a group of people and a space that they provide, stocked with the tools and know-how for just about any kind of project. Usually these people are hobbyists from all walks of life who want to explore aspects of science and technology and who value learning from each other. Projects range from chemistry to electronics to even open-source medicine. The possibilities are endless.
I found a number of people who had found uses for the Arduino device. This is a small device that can be assembled by any electronics hobbyist. It features a USB or serial port and can be used to control a lot of simple devices. People had even used them to modify their con badges with flashy LED displays and other modifications.
I met an individual who is working on an open-source electron microscope. This is a really cool idea. As it stands, the average person isn't likely to gain access to that kind of equipment without a grant from a major institution; the equipment itself costs tens of thousands of dollars and is very unwieldy. This gentleman and his associates are working on a way to make this tool available for just over $1000. That would make it accessible to enthusiasts and hobbyists who previously could not utilize an electron microscope. He made a very valid point in his presentation: a lot of discoveries were made not by highly financed researchers in expensive laboratories, but by ordinary people in their homes and businesses.
There are many examples of this kind of discovery. Alexander Graham Bell invented the telephone in his home. The Wright brothers engineered powered flight in their bike shop. Thomas Edison, while highly financed, did not have to deal with a lot of the issues that today's researchers face. All of these things have affected the world in a pretty profound way. The question begs to be asked: what new technologies will we see as a result of continued innovations of this type?