Back in the early Nineties, I was working on a Ph.D. applying a tool called a
Geographic Information System (GIS) to the challenge of modelling
archaeological deposits under cities. For those of us worrying about these
things, Mark Monmonier's then-newly published first edition of How to Lie
with Maps was required reading.
It wasn't so much a handbook for the nefarious as a primer for those who
wished to understand – or avoid – the traps and pitfalls so easily baked
into both physical and digital maps. A slight change in colour palette, a
shift of projection, an emphasis of this over that, and a superficially
factual and accurate map flips from portraying one truth to suggesting (or
trumpeting) a very different one. Sometimes it's deliberate. Sometimes
it's a (hopefully!) unfortunate accident.
It's time, I think, for How to Lie with Data. There are plenty of books...
The 'platform' tier in the middle of cloud computing's architecture is
being squeezed, folded and reshaped beyond recognition. Even with continued
investment, can it survive the transformative pressures bearing down upon it
from the software/application layer above, or the apparently inexorable
upward push from the infrastructure layer upon which it rests?
To judge from recent investments and enthusiastic headlines, it would be easy
to assume that Platform as a Service (or PaaS) is on the up. Red Hat recently
trumpeted the launch of OpenShift Enterprise — a 'private PaaS'...
Paul Miller's Blog
For too long, the emphasis in Cloud Computing circles has been almost
exclusively upon the provision of rapidly scalable and ad hoc remote
computing on top of cost-effective commodity hardware. The Cloud play from
Salesforce, Amazon's EC2 and the rest has been dominated by the implicit
assumption that these Cloud-based resources are an extension of the
corporate data center: a way simply to reduce the costs of enterprise
computing. There is value in this business, but there are bigger
opportunities. Cloud Computing, and the various *aaS movements, have
finally brou...
It’s sometimes easy to assume that the large clusters of commodity servers
commonly associated with open source big data and NoSQL approaches like
Hadoop have made supercomputers and eye-wateringly expensive high performance
computing (HPC) installations a thing of the past.
But Adaptive Computing CEO Robert Clyde argues that the world of HPC has
evolved, and that the machines in HPC labs now look an awful lot more like
regular computers than they used to. They use the same x86-based chipsets,
and they run the same (often Linux) operating systems. Furthermore, Clyde
argues that ...
I was talking with Avanade’s Senior Director for Enterprise Security, Ace
Swerling, earlier today. The conversation touched on a wide range of security
and identity management issues that I’ll probably return to, but one of
Ace’s comments brought my attention back to an issue that has been nagging
at me for a while.
As I'm sure we all know, security concerns often figure prominently in
discussions about moving enterprise applications and data to the Cloud.
Indeed, I spoke with other Avanade executives earlier this year to report on
a survey they had commissioned that suggested just h...