Archive for the ‘Potential improvements’ Category

Security issues and social engineering

Friday, April 28th, 2006

A couple of articles about a CD handed out in London, which users installed on their work computers, and the ensuing discussion of whether the workers or the technology were at fault for the security breach both end up missing the point. Both are at fault.

Schneier asks how many employees need that access. We should ask instead why they have that access at all. He starts down that road with an example: he is not a heating system expert, but he can still manage the temperature in his home, and he reflects that “computers need to work more like that.”

Sort of.

Some computers need to work more like that. From a user’s point of view, a bank teller’s world should be very limited… there’s no legitimate need to install software. This control was once in place – it was called “mainframe access” and nobody could install software unless authorized. However, the operating systems in use today on teller machines are generic (usually Windows) and have opened up all kinds of security nightmares because of their advanced capabilities.

Have you ever tried locking down Windows? Really tightly? It’s impossible – something always breaks. Try to build a kid-proof interface and then imagine keeping reasonable adults within that as well. Making a specialized interface is possible – witness all the kiosks or ATMs that run Windows NT (and bluescreen in public view) – it is just hard and not cost-effective.

A user could be tremendously more effective and efficient if they only had access to what they needed, but you must then create that interface for each class of users… not only bank tellers, but bank accountants and bank loan officers and auto mechanics and on and on. Right now this isn’t cost-effective, but an enterprising company could figure out a process to make it easier. After all, TiVo did.


Another Performancing dataloss bug

Wednesday, April 26th, 2006

When I was writing a new entry and wanted to look back at an older post in my log (all the way back to yesterday), I thought I would pull it up very quickly within Performancing For Firefox to see when it was posted. There wasn’t a date on the list item, so I clicked on the entry – and it overwrote everything in my editor. No save, no Undo, no warning dialog.

Bad! Always give warning!

Hopefully they have fixed this in version 1.2.


IE as a business failure?

Tuesday, April 25th, 2006

I have always hated how Internet Explorer did not follow standards, and how early versions broke perfectly formed HTML code.  However, John C. Dvorak argues that the bad design I resent has had severe business implications.

This Dvorak rant has got to be the weirdest yet most compelling argument I’ve heard all year – that Microsoft made a mistake in creating Internet Exploder.  Considering the problems I’ve always had with it, I would not be surprised if it was a net loss on Microsoft’s financials.  Pointing that out as a strategic mistake is a strange kind of insight…

Corner Cases cause concerns

Monday, April 24th, 2006

I’ve started using and fallen in love with Performancing For Firefox (PFF) for my weblogging. However…

I have found a couple of XML API oddities that emphasize the importance of thorough testing. Otherwise known as bugs. One is particularly nasty since it causes failure-with-no-warning, the worst possible software misstep. Both errors are escaping problems, caused by characters that were not anticipated or sanitized before being used in the log-post process.

The first is that one of my web logs has an apostrophe in the title, which makes the code believe a string has ended prematurely. My debugger lists the issue as a JavaScript error, but nothing I expect seems to fail… which leaves the question: what should it be doing that it is not?
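The apostrophe failure can be sketched in a few lines. This is a hypothetical reconstruction – the actual PFF internals aren’t visible here – assuming the extension builds a single-quoted string by plain interpolation:

```python
# A hypothetical reconstruction of the apostrophe bug: interpolating a
# weblog title into a single-quoted string literal without escaping.
# The title and variable names are invented for illustration.
blog_title = "A Programmer's Weblog"

# Naive interpolation: the apostrophe closes the string literal early,
# which is exactly the kind of premature-end error a debugger reports.
broken = "var title = '%s';" % blog_title
print(broken)

# The fix: escape backslashes and quotes before embedding the value.
escaped = blog_title.replace("\\", "\\\\").replace("'", "\\'")
print("var title = '%s';" % escaped)
```

The same escape-before-embedding rule applies regardless of whether the host format is a string literal, XML, or SQL.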

The second, more insidious, issue comes from one of my categories. The error reads:

unexpected end of XML entity (line 1)

When you try to submit a post, it just does nothing. No feedback, no error.  It turns out to be caused by my category ‘Tips & Tricks’, which has an ampersand… and XML thinks it is the start of an entity. Keep in mind that these categories are pulled from the weblog by PFF automatically, and you have no way of editing them. So PFF can’t handle information that it created itself when you are posting. Bad bad bad.

The obvious ways to fix this are to

  • escape the strings before using them
  • show the user a different string than the one the application uses internally

In either case, the ‘sanitize all input’ rule applies… but the most important rule in a software developer’s toolkit – fail loudly – was ignored. If you’re not preserving the data you’ve been given (in this example, by not posting it to the log) or at least yelling that you failed, you’re part of the problem. The user may close the browser and poof – instant data loss.
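To make the ampersand case concrete, here is a minimal sketch of the “escape the strings” fix. It assumes the client assembles its XML payload by naive string interpolation – a guess, since the actual PFF code isn’t shown here:

```python
# A minimal sketch of the "escape before use" fix, assuming the client
# builds its XML payload by naive string interpolation (hypothetical).
from xml.sax.saxutils import escape
import xml.etree.ElementTree as ET

category = "Tips & Tricks"

# Naive interpolation produces malformed XML: the bare "&" looks like
# the start of an entity, so the parser rejects the document.
naive = "<category>%s</category>" % category
try:
    ET.fromstring(naive)
except ET.ParseError as e:
    print("parse failed:", e)

# Escaping first turns "&" into "&amp;" and the document parses cleanly;
# the parser decodes the entity back to the original text on read.
safe = "<category>%s</category>" % escape(category)
parsed = ET.fromstring(safe)
print(parsed.text)
```

The round trip is lossless: the user still sees ‘Tips & Tricks’, and only the wire format carries the escaped form.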

One last cleanup annoyance

Friday, April 7th, 2006

So a few posts ago I tried to help Outlook avoid bad behavior and data destruction, and in return it gave me another bad error message. Here you see an empty folder, with a dire warning about all items in it being on the brink of destruction:

Trimmed Permanently Delete Error Message

It’s enough to make you cry. If there’s nothing in the folder, don’t talk about there being items and subfolders!

Outlook strikes again

Wednesday, April 5th, 2006

I’m on an email archive-and-purge mission in Outlook right now, and I found a few foibles that shouldn’t be there.  Bad user interface and data destruction waiting to happen… it must be Outlook!

I first tried to move folders with drag-and-drop into my Archive Folders that AutoArchive had created, and it kind of worked.  The folders moved but when I went to look for them, there were two!  FolderName and FolderName1 – all of a sudden I thought I was in a Dr. Seuss book.  No warning popup about a name conflict, no automatic merge, no query as to how I wanted it handled, no alert that tells me the folder has to be renamed to be moved… just a rename-and-don’t-warn-the-user evil change.  How to remediate?  Pick any of the options – but do not ever change things without alerting the user!

Second mistake, with far more critical ramifications.  Since I couldn’t move folders painlessly, I emptied their contents into the existing folders in the archive.  Then, I deleted the folders. One harmless workaround later, I was in business… until the next day.

Like any good power user, I’ve created email rules to sort incoming reports. Some went to folders I wasn’t going to use any more and had deleted, so when I adjusted my rules, the affected ones were highlighted in red for me to fix.  Great!  It’s not as nice as Eudora asking how to fix the rule automatically, but it is a fine way to deal with the issue. So I thought all the rules were fixed – but I was wrong.

Unbeknownst to me, the only rules highlighted were those pointing to folders I had moved… the deleted folders were still considered valid targets.  The next morning, reports started coming in and were distributed to the appropriate folders… already in the trash.  With no warning. One “Empty Deleted Items” from catastrophe.  However, I’m paranoid, so I went looking in the folders before permanently deleting them today – and I found emails!

Microsoft, that’s sloppy.  You want to leave folders within Deleted Items as valid rule targets, fine… I understand that some people might want that.  However, rules that point to non-deleted folders should never be silently converted to rules pointing at something ready to be deleted. Moving a folder with rules on it into Deleted Items should be fundamentally different from moving it into a different folder.  Some options to fix:

  1. Alert the user the rules now point into the trash.
  2. Prompt the user to change the rule, even if it is to point to the folder in the trash.
  3. Automatically break the rule as if the folder had been permanently deleted.
  4. Prevent the folder move until all rules are cleared.

I personally prefer #2, but anything is preferable to the current behavior.

Website oversight on LinkedIn

Friday, March 31st, 2006

So I’m enjoying LinkedIn, and it is a great way to keep in touch with some of the folks I have been in infrequent contact with. However, there’s one small niggle when inviting people… you can’t choose the source email address. I loved it when they added support for additional emails, since it made my account management easier. It just wasn’t thought through all the way. Some people have a whitelist for, or only know about, my secondary (business) email address rather than my home (primary) one, and I can’t get the invitation to send with the reply address I want.

The solution is easy: let users select which email to send from as part of the invitation. One more field and bingo! you’re good.

In addition, my apologies for not posting more recently. It’s been really busy and I will be catching up soon.

Bad Banking Error

Monday, March 13th, 2006

I recently opened an account at Bank of America, using their online application. It was slow, painful, and not a process I will repeat any time soon. It couldn’t reference any of my current data with them, so I had to re-enter all my address and other personal information. Bad customer service, even if there are good security reasons for some obstacles.

The cherry on top of my grousing sundae came at the end of the transaction. I wanted to transfer in the money to open the account from an external bank, and I wanted to put in enough money to avoid bank fees. Easy enough, put 1000 into the first box and 300 in the second, then routing number over here and bingo!

An error message. Not the error you’d expect, though…

Fund Your Accounts

Error
There was a problem processing your request.
Some fields may have been left blank or incorrectly filled in. Please review the form to ensure that all fields are properly completed.
# The initial deposit for Standard Checking must be between 100 and 100.
# The initial deposit for Regular Savings must be between 100 and 100.

Ummm… you’re kidding, right? I can only fill in 100 for my initial deposit, not more and not less? OK, not less I understand, but you really don’t want more of my money at your bank? Isn’t that the point of the promotion that brought me to your site?

Worse, there was no such issue when I chose another initial payment method. It’s apparently a niche issue and is really bad service, as well as a QA oversight.
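For illustration, here is one hypothetical way a “between 100 and 100” message can arise: a range validator whose upper bound silently falls back to the lower bound when no maximum is configured for a payment method. Everything here is invented to show the bug pattern, not Bank of America’s actual code:

```python
# A hypothetical sketch of the "between 100 and 100" failure: a range
# validator whose maximum was never configured for one payment method.
# All names and values here are invented for illustration.
def validate_deposit(amount, minimum=100, maximum=None):
    # Bug pattern: a missing maximum falls back to the minimum instead
    # of meaning "no upper limit", so anything above 100 is rejected.
    if maximum is None:
        maximum = minimum          # the likely defect
    if not (minimum <= amount <= maximum):
        return ("The initial deposit must be between %d and %d."
                % (minimum, maximum))
    return None

print(validate_deposit(1000))  # rejects a perfectly good deposit
print(validate_deposit(100))   # only the exact minimum passes
```

Whatever the real cause, the error message leaked the misconfiguration straight to the customer – which is why the “between 100 and 100” wording reads like a config bug rather than a policy.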

The punchline came when I walked into the branch to deliver my opening balance check. The account rep visibly winced as I told her a small portion of the story, and said I should come into the branch next time so they could save me time and headache. She said the system wasn’t that good.

Boy, and I thought my PR department had issues.

Windows XP crash analyst needs analyzing

Sunday, March 5th, 2006

So my laptop OS has crashed a number of times in its brief lifetime, and each time I have let the recovery program send the data back to Microsoft. Usually this is no big deal, and I’m happy to be a good beta tester – even though I’m using Windows XP Pro, a product that shouldn’t be this buggy after service packs. However, I had an episode that shows the user experience guys didn’t look very hard at the crash analysis/data-gathering program.

I was riding BART and my machine died. I was heading through the usual screens (why does this first one default to Don’t Send? How many users will just press Enter and not care which option is used? As a software guy, at least try to increase the odds of submission…)

Crash analysis screenshot 1

Since I wasn’t online, I got the Connect First screen. No problem, I’ll send it when I get home. (again, Cancel is default?)

Crash analysis screenshot 2

I’ll just sleep the computer until I’m done with my evening event and can get to an internet connection…

Nope.

Crash analysis screenshot 3

So I have to leave my computer powered on in the car for hours, hoping the battery holds out, because I’m too stubborn to not report the error. Why should I have this problem? I’m pretty sure the first window lets me Standby when it is active, but if I agree to report, then I am in one of those little-used, lightly-tested scenarios that should never have to exist. Really, you should never even see the second screen.

How to fix this? Easy… take a page from Mozilla or Apple. They’ve had crash reporting for quite a while that summarizes the data and queues it in the background, and I’ve never had an issue with it sending later. Nothing should be so critical to communicate in this application that it can’t wait until I connect to a network.
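A queue-and-flush reporter of the kind described is a small amount of code. This is a sketch of the general shape only – the paths, file format, and send callback are assumptions for illustration, not Mozilla’s or Apple’s actual implementation:

```python
# A minimal sketch of queue-and-send-later crash reporting: persist the
# report to disk immediately (never block on the network), then flush
# the queue whenever a connection is available. All names are invented.
import json
import os
import tempfile
import time

QUEUE_DIR = os.path.join(tempfile.gettempdir(), "crash_queue")

def enqueue_report(report: dict) -> str:
    """Write the crash report to disk and return its path."""
    os.makedirs(QUEUE_DIR, exist_ok=True)
    path = os.path.join(QUEUE_DIR, "report-%d.json" % time.time_ns())
    with open(path, "w") as f:
        json.dump(report, f)
    return path

def flush_queue(send) -> int:
    """Try to upload each queued report; keep anything that fails."""
    sent = 0
    for name in os.listdir(QUEUE_DIR):
        path = os.path.join(QUEUE_DIR, name)
        with open(path) as f:
            report = json.load(f)
        if send(report):          # e.g. an HTTP POST when online
            os.remove(path)
            sent += 1
    return sent

enqueue_report({"module": "example.dll", "code": "0xC0000005"})
print(flush_queue(lambda r: True))  # flush as if we were online
```

With this split, the crash dialog can let the machine go to standby immediately: the report is already safe on disk, and the upload happens on whatever schedule the network allows.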

BART fixes the wrong problem

Sunday, February 26th, 2006

A couple days ago, the Contra Costa Times wrote a front-page article about BART providing more information about the train arrival times in many different formats – pager, email, web, phone, telepathy, etc. – and how excited riders were to get the information. One was quoted: ‘”I had to park way far away today and I had no idea when the next train was coming,” Munzell said. “If I heard something that said seven minutes, I would not have had to jog here.”‘

He’s right, but it’s the wrong problem. Look at the BART director’s quote, and tell me what’s being overlooked here…

“It’ll reduce the time you have to wait for transit or allow you to just keep doing what you’re doing for longer instead of rushing to wait on a platform for the next train or bus,” said BART board Director Bob Franklin of Berkeley. “I think it will make transit overall more convenient.”

Errr… no, it won’t. You’ll just know how long the wait is.

So another rider: “A lot of times you get here and your train is just leaving. It’s kind of frustrating.”

OK, let’s solve that problem instead. Your train just left and you missed it. What’s the best way to make you feel good?

Provide another train.

The problem with BART is the disconnect between the system’s directors and the system itself… between moving riders conveniently – their stated goal – and moving riders in bulk, which is what they keep addressing with policies to increase ridership and with goals to get cars off the road. Those are good goals and worthy of working toward, but not the ones that they tout.

So then solve the problem: run trains more frequently so when you miss a train, the next one is right behind it. This is the same theory as the London Underground uses in the heart of the city, and when I commuted on that I never cared if I missed a train. There was always another one 2-5 minutes away. BART should run every 5 minutes.

To do this, budget objections will always come up. The right answer is then to cut routes, always synchronize transfers, and only run two sets of trains – Concord to Daly City and Richmond to Dublin/Fremont. All stops are covered by those two routes, and if you are going between routes, you transfer.

“Wait!” you say, “I was going Fremont-SF and now I have to wait longer! Not fair!”

Not so. You arrive just after work, at 5:07pm. You’ve missed the train and have to wait 14 minutes to catch the 5:21 train. If they ran every 5 minutes, you would have an equivalent turnaround with a transfer at 12th Street. The same travel time of 45 minutes, either way.
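The arithmetic generalizes: for a rider arriving at a random time, the average wait is half the headway, so even an unsynchronized transfer across two short-headway legs beats one long-headway route. A toy calculation, with headways chosen for illustration rather than taken from BART’s actual schedule:

```python
# Toy model of the headway argument: a rider arriving at a uniformly
# random time waits, on average, half the headway. The 15- and 5-minute
# headways below are illustrative assumptions, not BART's timetable.
def avg_wait(headway_min: float) -> float:
    return headway_min / 2.0

direct_15 = avg_wait(15)                 # one long-headway train
transfer_5 = avg_wait(5) + avg_wait(5)   # two short-headway legs
print(direct_15, transfer_5)  # expected waiting: 7.5 vs 5.0 minutes
```

And with transfers synchronized, as proposed above, the second wait shrinks toward zero, making the two-route plan better still.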

It also gets everyone riding a single route there 5-10 minutes faster, and makes today’s transfers quicker as well. Everyone wins: riders get happier, and BART gets more riders because the service is more reliable. More riders, and maybe they can add that direct route back for commute hours.

Now that’s a solution BART should consider.