June 15, 2012 By Bill Schrier
This week ICANN announced they had received 1930 applications for new top-level Internet domains.
ICANN is the Internet Corporation for Assigned Names and Numbers, usually pronounced “I can” and with no direct relationship to “Yes We Can”.
Today the Internet has just 22 such domains, including the familiar dot-com and dot-gov (which we all know and love). ICANN knew more were needed, especially non-English ones and some using non-Latin characters. So, for a mere $185,000 each, anyone was invited to apply for new domains, with no guarantee they’d be granted. The full list is here, and includes a number of familiar and quite a few unique proposals.
For city names, ICANN rules specified the City government had to at least acquiesce to the application. There are just a few proposals for City names, such as dot-NYC, dot-Boston, dot-Miami and dot-Vegas, plus some overseas ones such as dot-Paris (which could, I suppose, be contested by Paris, Kentucky). And only one of the United States applications appears to be from an official City government, dot-NYC. See more details on this at Government Technology news here.
Do cities really need their own domain names?
The City of Seattle was approached by at least one company seeking to apply for dot-Seattle. Their proposal was, basically, to put up the cash for the application, and then manage the sale of the names, presumably to individuals and companies who wanted the brand, such as microsoft.seattle or schrier.seattle. The City would receive a portion of any income beyond the cost to administer the domain name.
We didn’t pursue the opportunity for several reasons. Chiefly, I didn’t see how anyone would want to type microsoft.seattle when microsoft.com was shorter and easier. How much would Microsoft pay for that domain?
This logic would definitely hold for individuals or small businesses as well. Certainly some businesses might want a dot-seattle brand, but would there be $185,000 of such sales?
Furthermore, in order to pursue this, the City of Seattle really would have to issue an RFP and give other companies an opportunity to manage dot-seattle for us.
Hey, RFP’s are a lot of work.
Do Cities need their own domain names? Well, even at $185,000 plus management costs, dot-NYC makes sense. Maybe dot-Vegas will work. Dot-minneapolis or dot-wallawalla? Naw, I don’t think so.
March 18, 2012 By Bill Schrier
The project mantra is clear: "scope, schedule, budget". But how we actually do the planning, estimating and getting approval to start a project … well, that's a horse of a different color.
We promise the moon - "Project Widget will be the best thing for this department since sliced bread - it not only will slice bread, but will knead the dough and grow the yeast and self-bake itself". Then, of course, instead of delivering sliced bread we might end up delivering half-a-loaf, or maybe an electric knife or perhaps a chopped salad. That is the problem of getting the project’s scope right.
Then there is schedule. Of course every project is a "priority". We're going to get it done in the "next nine months". Why "nine” months? Because that's less than a calendar and budget year, but it is longer than saying it will be done tomorrow, which is patently ludicrous. But nine months is also ludicrous for anything other than incubating a baby - and even babies usually take years of planning and preparation. Furthermore, in the public sector almost every procurement has to be done by RFP, and preparing a request for proposals alone, plus contract negotiations with a successful vendor, cannot be done in less than a year. And the schedule needs to include minor components such as business process discovery and the work of executing on the project.
Then there's budget. Generally we'll make a pretty good estimate of the actual real cost of the project. The usual mistake is for someone (fill-in-the-blank - "department director", "Mayor", "county commissioner", "state legislator", "grand poobah") to say "we only have x number of dollars". So, as the next step, the project budget shrinks to the magic budget number, while scope and schedule are left unchanged. And generally the "magic budget number" is determined by some highly scientific means such as the amount of money left over in a department budget at the end of a fiscal year, or the amount of money the City of Podunk Center spent on a similar project, or the size of a property tax increase which voters might be reasonably persuaded to pass.
Why do we plan projects this way in the public sector?
First, we are largely transparent and accountable in government. That’s really good news, because we – government – are stewards of taxpayer and ratepayer money. Oh, I suppose we can hide some small boondoggles, but there are too many whistleblowers and too much media scrutiny to hide a major failure. That's not true in the private sector, where projects costing tens or hundreds of millions of dollars are failures or near failures, often hidden from public or shareholder view, with wide-ranging and sometimes near catastrophic economic effects. Some public examples include Boeing's 787 Dreamliner or the Microsoft Courier tablet (gee, will anyone ever produce a Windows tablet?). The federal government’s project failures are paramount examples of both poor project planning/execution and admirable transparency with an eye to reform.
Here are my top reasons for project mis-estimation:
And here are my top cures:
What's amazing is that, despite everything I've said above, we still complete a great number of projects well. At the City of Seattle, we’ve tracked all our major projects. Since 2006, we’ve tracked 77 projects through 2,071 project dashboard reports. We’ve found that, when they are completed, 75% of them are within budget. Of those 77 projects, 32% have been on time and 57% have delivered the scope they promised (i.e. a whole loaf of sliced bread). Clearly this record reflects our priorities – budget is the most important consideration, with scope second, and schedule lowest.
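A roll-up like this is straightforward to automate from dashboard records. Here’s a minimal sketch of computing those three success rates; the `Project` fields and sample data below are illustrative assumptions on my part, not Seattle’s actual dashboard schema.

```python
# Sketch: rolling up project dashboard records into budget/schedule/scope
# success rates. Field names and sample projects are illustrative only.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    on_budget: bool
    on_time: bool
    full_scope: bool

def success_rates(projects):
    """Return (pct_on_budget, pct_on_time, pct_full_scope) as whole percents."""
    n = len(projects)
    pct = lambda flag: round(100 * sum(1 for p in projects if getattr(p, flag)) / n)
    return pct("on_budget"), pct("on_time"), pct("full_scope")

projects = [
    Project("Widget", True, False, True),
    Project("Sliced Bread", True, True, False),
    Project("Permit System", False, False, True),
    Project("CAD Upgrade", True, False, False),
]
print(success_rates(projects))  # (75, 25, 50) for this sample
```

The same three-number summary, computed per quarter, is what makes the budget-first, schedule-last pattern visible over time.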
Not a bad record when compared with Standish group project failure statistics, but plenty of room for improvement.
January 21, 2012 By Bill Schrier
The Seattle area and I just went through a four day snow/ice storm event. The City of Seattle’s emergency operations center (EOC) was activated and coordinated the City government’s response. That response received high marks from the public and media for a variety of reasons (see Seattle Times editorial here), including the leadership of Mayor Michael McGinn.
I was able to personally observe that response and lead the technology support of it. Information technology materially contributed to the improved response; nevertheless I see a number of further potential enhancements using technology. And that’s the purpose of this blog entry.
GIS GIS GIS (Maps)
Every city, county and state is all about geography and maps. Maps are the way we deploy resources (think “snowplows”). Maps are the way we understand what’s happening in our jurisdiction.
Everyone who has lived and traveled inside a city can look on a map and instantly visualize locations - what the “West Seattle bridge” or any other street, infrastructure or geographical feature (think “hill”) looks like.
For this storm, we have some great mapping tools in place, especially a map which showed which streets had been recently plowed and de-iced. This map used GPS technology attached to the snowplow trucks. That same map had links to over 162 real-time traffic cameras so people could see the street conditions and traffic. (Other cities, like Chicago, have similar maps.)
Another useful map is the electrical utility’s system status map, which shows the exact locations of electrical system outages, the number of outages, the number of customers affected and the estimated restoration times. This is really useful if you are a customer who is affected – at least you know we’ve received your problem and a crew will be on the way.
What could we do better? We could put GPS on every City government vehicle and with every City crew and display all that information on a map. That way we’d immediately know the location of all our resources. If there was a significant problem – let’s say a downed tree blocking a road or trapping people – we could immediately dispatch the closest resources. In that case we’d typically dispatch a transportation department tree-clearing crew. But that crew might have to travel across the City when a parks department crew with the proper equipment might be a block away.
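The "dispatch the closest crew" idea boils down to a nearest-neighbor lookup over live GPS fixes. Here’s a minimal sketch using the standard haversine great-circle distance; the crew names and coordinates are made up for illustration.

```python
# Sketch: given GPS fixes for every City crew, find the closest one to an
# incident. Crew names and coordinates below are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def closest_crew(incident, crews):
    """crews: dict of name -> (lat, lon). Returns the nearest crew's name."""
    lat, lon = incident
    return min(crews, key=lambda name: haversine_km(lat, lon, *crews[name]))

crews = {
    "transportation-tree-crew": (47.660, -122.350),  # across town
    "parks-crew": (47.605, -122.333),                # a block away
}
incident = (47.606, -122.332)  # downed tree blocking a road
print(closest_crew(incident, crews))  # parks-crew
```

In practice the lookup would also filter on crew capability and availability (a parks crew only helps if it carries the right equipment), but the distance calculation is the core of it.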
This same sort of map could show a variety of other information – the location of police and fire units, which streets are closed due to steep hills and ice, where flooding is occurring, blocked storm drains, as well as water system and electrical outages.
This “common operating picture”, across departments, would be enormously useful – as just one example, the fire department needs water to fight fires, and it needs good routes to get its apparatus to the fire and perhaps it would need a snowplow to clear a street as well.
Obviously we wouldn’t want to show all of this information to the public – criminals would have a field day if they knew the location of police units! But a filtered view certainly could be presented to show the City government in action.
Perceptions and Citizen Contact
A lot of media descended on Seattle this week. Partly that was due to the uniqueness of the storm – it doesn’t snow much in this City. And perhaps it was a slow news week in the world. A lot of news crews filmed inside the EOC. The Mayor and other key department spokespeople were readily available with information. This is quite important – the television, radio and print/blog media are really important in advising the public on actions they should take (“take public transit today, don’t drive”) and actions they should avoid (“don’t use a charcoal grill to cook when you are without power”). Our joint information center (JIC) was a great success.
Mayor McGinn’s family even contributed to this – his 11 year old son filmed him in a public service announcement about how to clear a storm drain of snow and ice which is now posted on the Seattle Channel.
What could we do better? We need better video conferencing technology, so the Mayor and senior leaders can be reached quickly by news media without sending a crew to the EOC. This video conferencing would also be quite useful in coordinating action plans between departments with leaders in different locations. In a larger, regional, disaster, such capability would allow the governor, mayors and county executives to rapidly and easily talk to each other to coordinate their work. It is much easier for anyone to communicate if they can see the visual cues of others on the call.
Also, Seattle, like many cities, is a place of many languages and nationalities. We need to have translators available to get communications out in the languages our residents speak. This might include a volunteer-staffed translation team but at least could include recording and rapidly distributing written, video and audio/radio public service announcements in multiple languages.
Commuting and Telecommuting
In these emergencies, many people elect to use public transit – buses and trains for commuting. (I actually took my “boat” – the water taxi - to work twice this week.) Yet snowstorms are also the times when buses jackknife or get stuck in snowdrifts or while climbing hills. In this emergency, the coordination between the transit agency (“Metro”) and the City was quite improved, because we had people – liaisons – from each agency embedded with the other. This allowed snowplows to help keep bus routes clear and help clear streets near trapped buses.
And, with recent technology advances and sorta-broadband networks, many workers can now telecommute. Seattle had few outages of Internet service this week, although in suburban areas trees and snow brought down not just power lines, but telephone and cable lines as well causing more widespread Internet issues.
What could we do better? The easiest and most useful advance, I think, would be GPS on every bus and train and water taxi boat. That, combined with real-time mapping, would allow people to see the location of their rides right on their smartphones. If we deployed it right, such technology might also show how full the bus is and the locations of stuck buses. This sort of technology would be useful every day for public transit users – but is especially important during snow emergencies.
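A rider-facing map like this is usually built from a stream of vehicle positions. Here’s a minimal sketch of classifying each vehicle for display; the `Vehicle` fields and the thresholds are illustrative assumptions (a production system would more likely consume a standard feed such as GTFS-realtime).

```python
# Sketch: annotating a real-time vehicle-position feed for a rider map.
# The feed format is an illustrative assumption, not Metro's actual API.
from dataclasses import dataclass

@dataclass
class Vehicle:
    route: str
    lat: float
    lon: float
    load_pct: int          # how full the bus is, 0-100
    minutes_stationary: int

def map_annotations(feed, stuck_after=10, full_at=90):
    """Classify each vehicle for display: 'stuck', 'full', or 'ok'."""
    def status(v):
        if v.minutes_stationary >= stuck_after:
            return "stuck"
        if v.load_pct >= full_at:
            return "full"
        return "ok"
    return {f"{v.route}@{v.lat},{v.lon}": status(v) for v in feed}

feed = [
    Vehicle("Route 120", 47.56, -122.36, 95, 2),   # packed but moving
    Vehicle("Route 2", 47.61, -122.32, 40, 25),    # stuck on an icy hill
]
print(map_annotations(feed))
```

The "stuck" classification is exactly the signal the EOC would want during a snowstorm, while "full" is the everyday payoff for commuters.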
Another huge necessity – which I’ve advocated often and loudly – is very high speed fiber broadband networks. With Gigabit (a billion bits per second), two-way fiber broadband, telecommuting and tele-education become really possible. Kids could continue their school day with video classes even when schools are closed, you could visit your doctor, and of course citizens would have access to all the emergency information and maps described above, real-time and two-way. I could go on and on about this – and I have – read it here.
Crowdsourcing and Two-Way Communications, Cell Phones
This area is the most ripe for improved technology to “weather the storm”. In any emergency – even a minor disaster like a major fire or a pile-up collision – just obtaining and distributing information early and often will have a significant result in managing the problem.
On-duty at any time, the City of Seattle may have 200 firefighters, 350 police officers and several hundred to several thousand other employees. Yet we also have 600,000 people in the City, each of whom is a possible source of information. How could we get many of them, for example, to tell us the snow and ice conditions in their neighborhoods? Or perhaps to tell us of problems such as clogged storm drains or stuck vehicles? The Seattle Times actually did this a bit, crowdsourcing snow depths from Facebook.
How can we “crowd source” such information? I’m not exactly sure. Perhaps we could use Facebook apps or Twitter (although not a lot of people use Twitter). Two-way text messages are possible. Any one of these solutions would present a whole mass of data which needs to be processed, tagged for reliability, and then presented as useful analytics.
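One simple way to tag such reports for reliability is corroboration: a condition reported independently by several nearby citizens is more trustworthy than a single report. Here’s a minimal sketch of that idea; the grid size and threshold are illustrative assumptions.

```python
# Sketch: scoring crowdsourced reports by corroboration before mapping them.
# Grid cell size and the minimum-report threshold are illustrative choices.
from collections import Counter

def grid_cell(lat, lon, cell=0.01):
    """Bucket a coordinate into a roughly 1 km grid cell."""
    return (round(lat / cell), round(lon / cell))

def corroborated(reports, min_reports=3):
    """reports: list of (lat, lon, condition). Return the conditions, per
    grid cell, that at least `min_reports` citizens independently agree on."""
    counts = Counter((grid_cell(lat, lon), cond) for lat, lon, cond in reports)
    return {key: n for key, n in counts.items() if n >= min_reports}

reports = [
    (47.61, -122.33, "clogged storm drain"),
    (47.61, -122.33, "clogged storm drain"),
    (47.611, -122.331, "clogged storm drain"),   # a neighbor agrees
    (47.70, -122.30, "stuck vehicle"),           # single, uncorroborated
]
print(corroborated(reports))
```

A real system would also weight reporter history and recency, but even this simple filter turns a "whole mass of data" into something a dispatcher can act on.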
Eventually, of course, there will be whole armies of remote sensors (“the Internet of things”) to collect and report the information. Perhaps everyone’s cell phone might eventually be a data collector (yes, yes, I’m well aware of privacy concerns).
In the meantime, we should have some way citizens can sign up for alerts about weather or other problems. Many such systems exist, such as the GovDelivery-powered one used by King County Transportation. I’m not aware of such a system being used two-way, to crowd-source information from citizens. There are also plenty of community-notification or “Reverse 911” systems on the market. The Federal government is developing CMAS, which would automatically alert every cell phone / mobile device in a certain geographical area about an impending problem or disaster.
Furthermore, during this Seattle snowstorm, many City of Seattle employees – including police and fire chiefs and department heads – used text messages on commercial cellular networks to communicate with their staff and field units. This continues a tradition of text messaging during emergency operations which first came to prominence during Hurricane Katrina.
All of these solutions depend, of course, on reliable cellular networks. We know that during disasters commercial cellular networks can easily be overloaded (example: 2011’s Hurricane Irene), calls are dropped, and cell sites fall out of service as power outages occur and the backup batteries at the sites run out of juice. Yet, for people without power or land-line Internet, a smartphone with Internet access is a potential lifesaver and at least a link to the outside world.
I’d like a way to easily collect this information – privately – from the carriers so emergency managers would know the geographies where mobile networks are impacted.
This leads me, of course, to my final point – that we need a nationwide public safety wireless broadband network. Such a network would be built using spectrum the Congress and the FCC have set aside for this purpose. It would only be used by public safety, although – as our Seattle snowstorm underscored – “public safety” must be defined broadly to include utilities, transportation and public works, even building departments. And it would be high speed and resilient, with 4G wireless technology, backup generators and hardened cell sites.
These are a few of my thoughts on better management, through technology, of future snowstorms and other disasters, large and small, both daily and once-in-a-lifetime ones. What have I missed?
January 3, 2012 By Bill Schrier
They have to work.
All the time.
During power outages, hurricanes, earthquakes.
When every other wireless network is dead.
So they have to be built, maintained and operated by government, right?
Or else they cannot be trusted, right?
That's the way cities, counties, regions, states and local governments have ALWAYS built our radio networks for police, firefighters, emergency medical response, utilities, transportation, public works. And with good reason.
Historically (by that, I mean "before cell phones"), most radio networks were really unreliable. They were used to dispatch taxicabs and for citizens' band radio ("CB") by amateurs. But no government would trust such a radio network to dispatch cops or firefighters. Such networks had dead spots, lots of static, and dropped off the air entirely when the electricity failed.
With the rise of commercial cell phone and, later, smart phone networks, such networks became … well … "really unreliable". Even today many people are angered and upset by dropped calls, "all circuits busy" and slow-loading (or "never loading") pages. And during any large event - a packed stadium for a baseball game, or a major traffic jam, a windstorm or an earthquake, you might as well use your phone as a camera, because you probably won't get through to make a call.
When you're being robbed at gunpoint or having a heart attack, do you really want the first responders coming to help YOU to depend on such networks? That's why, as I’ve blogged before, "cops don't use cell phones".
But building government-owned radio networks is REALLY expensive. A public safety voice network requires just a handful of sites - say 8 radio sites for Seattle or maybe 30 for all of King County here in Washington State. However, to rebuild those networks today, and to build the new high-speed data networks for responders’ smart phones, tablets and computers will take dozens - perhaps hundreds of sites to cover the same geography. And THAT takes hundreds of millions of dollars.
Hello - we're still in the midst of the Great Recession, right? Government budgets are pinched left and right - sales tax, income tax, property tax revenues are all falling. While the private sector is still hiring, many governments are laying off employees. There are few dollars available for hundred million dollar networks.
Is there a middle way? Is there some way governments could take advantage of the hundreds of existing cell phone sites developed for commercial networks? Perhaps a way the commercial networks could take advantage of fiber optic networks and buildings or radio sites owned by government? And some way we could make the cell phone networks more secure, more resistant to terrorism and natural disasters, and therefore more reliable for public safety use?
Here in Seattle, we think so.
We think we might be able to start with all the assets which taxpayers have already bought and paid for - the fiber and microwave networks, radio sites, backup generators, skilled technology employees, and our existing investments in radios and computers. Then we would add equipment and cell sites and other assets, along with expertise and innovative ideas from private sector companies - telecommunications carriers, equipment manufacturers and apps developers. Mashing these together, we might get a private-public partnership which gives consumers and businesses more reliable, faster mobile networks, while giving responders new, state-of-the-art networks at a fraction of the cost of building them from scratch, like we've always done before.
That's the idea behind a request for information (RFI) issued by the City of Seattle several weeks ago seeking ideas about private-public partnerships for next generation networks. We need some great pioneering “outside the box” ideas in response to the RFI.
And then, perhaps, we can build a modern, smart, network in the Central Puget Sound which saves everyone money, and works reliably during disasters small (“heart attack”) and large (“earthquake”).
P. S. All these ideas are not mine. In fact, to some extent I’ve been hauled kicking and screaming (or maybe shuffling and whimpering) to look for a middle way. Let’s give credit to Deputy King County Executive Fred Jarrett, United States Chief Technology Officer Aneesh Chopra, elected officials like State Representative Reuven Carlyle and Mr. Stan Wu of the City of Seattle for “coloring outside the lines without falling off the page”.
November 13, 2011 By Bill Schrier
It is the season for Ghosts. We've just finished celebrating the spirits and Ghosts of All Saints Day, All Hallows Eve and All Souls Day. Soon we will be visited by the Ghosts of Christmas.
Information technology has its own Ghosts, and we government technologists have our special subspecies of technology Ghosts.
We all know about technology Ghosts. The story of the ill-fated Microsoft Courier tablet, doomed to be stillborn, has been haunting the news feeds again lately. HP's TouchPad and (maybe) webOS were given up to an existence someplace between the living and the dead (tech Zombiedom?) earlier this year. Whole technology companies and technologies have become Ghosts or are destined for slow, lingering deaths and a future ghoulish existence. WiMax, once the darling of 4G wireless networks, is all but dead in favor of its big brother, Long Term Evolution or LTE. Steve Jobs is widely hailed for bringing Apple Computer back from a Ghostlike doom; his role creating the Ghost of NeXT is less celebrated. And companies like Digital Equipment Corporation (DEC), once the #2 computer company worldwide, fell into the dustbin of tech history, being purchased by Compaq, which in turn was gobbled up by HP. It sure seems like RIM and its successful BlackBerrys may be headed down a similar path.
As I mentioned earlier, Government has both Ghosts-in-common with commercial companies and our own unique set of Ghosts.
Most government computers are haunted by the Ghost called Windows XP. Ten years old, declared "dead" by Microsoft, Windows XP is still a workhorse in many agencies, as we struggle to make sure our myriad applications will work with Windows 7, and we try to find the dollars to upgrade. At least the Windows XP Ghost will be fondly remembered, unlike Windows Vista, which hopefully has a home someplace in a tech Hades. Mainframe computers, and especially the IBM mainframes, are alive and well, working hard in some places. In governments, however, too often they house almost-Ghostlike tax systems or scheduling and management systems for Courts, applications which are old and creaky but mainstays for some cities, counties and states.
Some applications are wraiths, staying long beyond their normal useful lives, because they are both functional and beloved by users. Northrop Grumman’s PRC is a public safety computer-aided dispatch system, "green screen" and command-line driven. Dispatchers and field officers became familiar with its arcane but quick-to-type commands, and memorized them. Newer dispatch systems were Windows- and GUI-based ("gooey", for Graphical User Interface, a term I never liked). But to do the same one-line-of-text PRC function on a newer GUI system often would involve opening multiple windows, drop-down boxes, address verification functions and other tasks which vastly lengthened the time to dispatch a police unit or fire call. It took dispatchers some time and training to exorcise these Ghosts.
Analog public safety radio networks are another Ghost which many cities, counties and regions use today. The counties of the central Puget Sound, including Seattle, presently have older Motorola analog public safety radio systems - in our case with over 20,000 vehicle-mounted and handheld radios. These systems are functional and critical to dispatching fire, police and emergency medical officers to every 911 call and incident. Yet they are based upon 6809 chip architecture. The 6809 chip was used in the Tandy Color Computer, which was in its heyday around 1978-1980. Talk about Ghosts - what other technology from 1978 is still functional today? Such systems won't be supported for much longer (just like Windows XP) but upgrading or replacing them will not be easy or cheap. Yet, unlike cell phone networks, these 6809-architected systems have been extraordinarily reliable, often with 99.999% uptime.
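It's worth making that "99.999% uptime" figure concrete: availability percentages translate directly into permitted minutes of outage per year, and the difference between each added "nine" is dramatic.

```python
# Sketch: converting an availability percentage into allowed downtime
# per (non-leap) year, to show what "five nines" actually means.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_pct):
    """Minutes of outage per year permitted at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for nines in (99.9, 99.99, 99.999):
    print(f"{nines}% -> {downtime_minutes(nines):.1f} min/yr")
```

At five nines, a radio system may be down only about five minutes per year – a standard that, as noted above, commercial cellular networks do not come close to meeting during disasters.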
I'm sure readers of this blog (if there are any!) probably have your own favorite technology Ghosts, many of which may still haunt your support staff and data centers - don't hesitate to leave a comment and describe them.
And, alas, many of these Ghosts are hard to exorcise for many reasons - lack of budget for the replacement, many interfaces and dependencies, and just plain old fear of change ("if it ain't broke, don't fix it"). In many cases that means the data used in these ancient ghostly systems is locked up, and hard to interoperate with or interface to other, more modern systems.
In a sense, I'm also a technology "Ghost" of sorts, I guess, spanning the time from the first Apple II computer to the iPad of today, from the Apollo moon landings to today when the little netbook computer I'm using for blogging and tweeting has more computing power than the entire Apollo system had in 1969.
But this last "Ghost" – Bill Schrier - is not going away anytime soon!