September 03, 2008

A lesson from Chrome

As you will surely know by now, Google launched its own browser, Chrome.

I won't discuss the fact that it is only available on Windows (guys, most people see you as a "Microsoft alternative", wake up!) or whether it makes sense to have yet another browser.
I'd like to elaborate a little on a very nice article on TechCrunch.

Chrome does not support Lively (remember? Google's Second Life). Google Analytics does not know about Chrome.
If you search Google for Chrome, you don't get it as the first result (OK, we know it's Google's policy, but still...).

There's something we should learn from that. Here we have a huge company building dozens of products at the same time, but we can see similar things happening in smaller companies with just half a dozen products. It's about development awareness: what's the rest of the company doing? How will my new software integrate with what we already have?

You cannot afford a "standalone solution which is somehow integrated with the rest, sort of" - well, unless you're Google, of course. Yet that's exactly what just happened to Google, and what keeps happening every time a new piece of software is launched.

You're not developing for the mainframe anymore! Start thinking about the environment. Build your external API before even finishing your GUI; think about integration before completion.

August 24, 2008

Trojanize yourself for deniability

I know this has been discussed a thousand times before (since 2003 at least!), but a recent assignment has made me think again about it. Let's presume you're on a forensic task, and you're surfing through the suspect's computer. You end up finding the contents you were looking for, but meanwhile you start the routine antivirus scan. Ding, you hit a well-known trojan.

It's password protected, and was obviously installed before the data you were looking for were downloaded.
You dig deeper, and discover the trojan will actually start at boot and be exposed to the internet.

That's it: the suspect has not lowered his security level during normal operations - assuming the trojan is actually safe and the password was hard enough to guess - and you are left wondering who actually put that data into place. How can you tell it wasn't a remote attacker controlling the suspect's computer?

Sure, you can try to retrieve some more data to uncover the truth, but carefully leveraging this trivial issue (think about actually giving the trojan encrypted commands from time to time, from a different account, to confuse even a 100% sniffed wiretap) is enough to obtain plausible deniability.

It seems too easy: I'll keep thinking about it, but any idea is really welcome.

August 04, 2008

Understanding High Availability

I've just finished a course on High Availability, more of an overview of different HA technologies on various platforms.
What I have noticed is that it is really, really hard to make people understand that you cannot treat high availability as a "one night affair". Most organizations have their border routers under VRRP and their Oracle database running on an application cluster, yet they seldom have layer 2 redundancy (the "oh my god, a loop! kill it, kill it!" syndrome) or any redundancy on "less important" systems.

Like an old friend used to say, "if it's worth having, it's worth having all the time". With the virtualization techniques available today there's really no excuse for not achieving HA on most of your infrastructure.

Need an easy-to-manage yet featureful HA firewall? Go for pfSense. Name almost any software and an HA solution is there, for free or for the time you need to build it: if it's running on Linux you have DRBD (150-160Mb over two bonded NICs), Heartbeat and many others; if it's on Windows you have tons of choices - not to forget a scheduled VMware Converter run, which might not be HA but is still far more than most organizations actually have.
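For the record, a minimal DRBD resource definition looks more or less like this (a sketch: hostnames, disks and IP addresses are placeholders you'll have to adapt to your setup):

```
resource r0 {
  protocol C;                    # synchronous replication
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;         # backing block device
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

Heartbeat then just has to mount /dev/drbd0 and start your services on whichever node is primary.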

One of our clients had a hardware failure last Friday, which resulted in a complete halt of business for the weekend. It's hard to tell how much damage was actually done, but does it make any sense to operate in such a way when HA solutions are so cheap?

Yes, you need skills to do HA. But what we don't need anymore in our business is IT people without skills: we already have far too many.

PS: As you might or might not have noticed, this is the first post in ages. Long story short: more posts will come from now on ;)

April 12, 2008

Location aware social networks

Yet another step in the direction of tight real-world/internet integration: the number of startups proposing cell-phone based, location-aware software is skyrocketing. We've already discussed LinkedIn going mobile and the current problems of actually using cellphones for social networking, but now the market is getting crowded.

The most straightforward use of such a network, and probably the one with the best ROI as of today, is the "mobile dating" niche. MeetMoi and limejuice are doing it right now, but more will surely join in the future.

While some other startups are taking a "one network fits all" approach (MobiLuck, Imity, Loopt), there is space for more specialized networks.

Think about a gaming platform, or about hobbyists who seldom meet each other, and so on. While a clone of Facebook would likely result in a huge mess in any city as soon as it reached critical mass, a focused application connecting only certain kinds of people could do the job: meeting even one new person from the "70s-singers-wearing-only-black-shirts-from-Lausanne" fanclub could easily be worth it.

Meanwhile, the first iPhone-powered social network is almost ready.

April 03, 2008

Memory overcommitting and virtualization

During March the virtualization scene, or at least its most technical part, discussed memory management. What does that mean?

Imagine you are going out for a picnic with 10 friends. You could take 3 cars or a small bus. You go for the bus so you can save a little on fuel, just as you could consolidate 10 virtual machines on a single hardware server and save power. So far so good. But now let's take the metaphor further: since you are going on a picnic, you need some tools, like a barbecue, a blanket, a basket and so on. Once again, you could have one item for each person, or you could share them. In virtualization, this is called page sharing: virtual machines share memory pages with the same contents.

Page sharing is a (big) part of memory overcommitting in virtual environments. The next part is the balloon driver. Imagine you have to take your coat off in the car: you take up a little more space on the back seat and your friend moves out of the way, since he's not doing anything important anyway. The same goes for the balloon logic: if a virtual machine is not actually using the RAM it was granted, it gets "preempted" and the memory is assigned to another machine.
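The page sharing idea is easy to model in a few lines of Python (a toy sketch, nothing like VMware's actual implementation: pages are just byte strings here):

```python
import hashlib

def shared_memory_footprint(vms):
    """Given per-VM lists of page contents, return (naive_pages, deduplicated_pages).

    Identical pages across VMs are stored once, as in transparent page sharing.
    """
    naive = sum(len(pages) for pages in vms)
    unique = {hashlib.sha1(p).hexdigest() for pages in vms for p in pages}
    return naive, len(unique)

# Ten VMs booting the same OS: most pages are identical, a few are private.
common = [b"kernel-page-%d" % i for i in range(90)]
vms = [common + [b"vm%d-private-%d" % (n, i) for i in range(10)] for n in range(10)]
naive, shared = shared_memory_footprint(vms)
print(naive, shared)  # 1000 naive pages vs 190 actually stored
```

Notice how the savings depend entirely on the VMs being alike: make the private pages dominate and the deduplicated count converges back to the naive one. That's the whole capacity-planning caveat in one function.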

VMware, in an attempt to show how its own overcommitting is far better than the competition's (say, for instance, XEN's), has published some tests where various instances of Windows run various applications. The story got covered elsewhere as well.

So, why am I writing about this? Well, as always with tests, we have to think about them, otherwise we just overlook their real meaning. Repeat with me: overcommitting has to be tested in my environment before I can judge it and do proper capacity planning.
Why? 178 virtual machines all running up-to-date Windows and (almost) the same services will leverage page sharing! And a lot of it, I should add. So you cannot really count on memory overcommitting for capacity planning if you, like 90% of the companies I know (and that's a lot), run different operating systems, applications and so on.

We could wonder why VMware is not showing tests with different OSes and services... and by now you can probably answer that question yourself.

March 11, 2008

The best online CRM, intro

A CRM package (where CRM stands for Customer Relationship Management) is one of the central pieces of software of any business.
What we once did by bare memory and "the human touch", today we do with very, very complex software (strictly speaking, CRM is a strategic approach, but nowadays when we say CRM we mean just the software). Web-oriented CRMs are growing bigger and bigger: their ubiquity, low total cost of ownership and all the usual pros associated with web applications are very important factors when choosing a new CRM.

A lot of free or low-cost CRMs surface every day, and some are gaining a good degree of popularity. In these articles I will discuss some of the most used CRMs, examining both the technical and the business facts, from the perspective of both an SMB and a freelancer.

The first one will be the VTiger / SugarCRM couple, coming tomorrow.

March 07, 2008

Can you trust a replicant? Virtualization and model checking

Nowadays it's almost impossible to be in the IT business without being somehow involved with virtualization. Snapshots and complete control over a machine speed up development and testing by orders of magnitude and are invaluable tools in the hands of sysops and developers alike.
Tonight I came across Virtutech, a company doing emulation of various hardware platforms. Using their words, they do virtualized software development.
Their products had me asking myself a question: can we really trust virtualized environments to be significant for our tests?
Last week I had a discussion with a colleague about building an exploit-testing machine where we could run new exploits, a simple sandbox for our lab. My colleague argued that using a virtualized solution could have a significant impact on tests involving direct access to memory at ring 0. I've not been able to find an answer to this argument (feel free to comment if you have) since technical insights into these details are somewhat lacking.
Model checking is a difficult discipline, seldom used in the real world. Virtutech's solution seems to be based on Simics, a virtualization platform originally from SICS. Simics has been around since 1995 as a full-platform emulator aimed at virtualizing embedded systems, and as such seems to be a rather reliable solution: within its framework, hardware vendors have to develop an emulation layer representing their hardware (a virtual platform).
One could ask how reliable the framework is, and how reliable the virtual platforms actually are. From the Virtutech website:
It is important to note that a Simics Virtual Platform is a representation of the physical board/system. Virtutech does not warrant that all aspects of the physical hardware have been modeled. Consult the documentation accompanying the Virtual Platform for additional details regarding actual implementation.

That is: you cannot blindly trust the platforms, and we're speaking about rather simple environments compared to full x86 server systems.
So, the question is: can we really trust virtualization from a formal, rigorous viewpoint?
Would you trust a life-support machine tested only on virtualized hardware, to cut time to market?

March 06, 2008

iPhone SDK is available, enter the App Store

Hats off, this time. Engadget blogged in real time all day from the iPhone SDK press conference. The results?
  • Exchange on the iPhone. That's right: Microsoft has built direct access to the Exchange server, bypassing the good old ActiveSync. I see trouble coming from this behaviour, very Apple-style, but time will tell. For now, it's a good thing.

  • The SDK. This is the news. Apple got the hint and released the complete SDK, from Cocoa up. We'll see how open it really is (unlike what happened in the past). That's what community pressure is all about. Is that all, folks?

Enter the App Store. I guess you all know iTunes. OK, same idea, but for applications. No charge for free applications, 30% of the customer price for commercial apps, with no mention of any entry fees. That's Apple for you: you don't just build a community, you start something bigger, able to generate huge revenues.
I'm suspending further judgment until I can actually see the thing running, but feel free to comment: will the App Store be able to change the way we use software on mobile devices? Consider this: in Italy the entertainment content market on mobile phones is bigger than the good old music-on-CD market. Why? For many reasons.

Cisco and KVM

Breaking news published yesterday: Cisco will use KVM on its brand new ASR 1000 router.
KVM is a virtualization technology included in modern Linux kernels: it is the virtualization platform supported by Ubuntu, ready to replace Xen in most open-source environments as soon as it reaches enough stability and usability (and possibly gets a user interface).
The ASR 1000 is Cisco's highest-end router, costing around 35k US$, and it's the first Cisco router using Linux instead of the proprietary IOS. The ASR 1000 will leverage KVM to provide operating system redundancy without any dedicated hardware.
While Cisco has invested in VMware in the past, and the two are collaborating on the VFrame technology, the message is clear: there's no space left for VMware in embedded, low-footprint virtualization. The ability to fine-tune the operating system to its maximum and the source code availability of KVM offer unmatched advantages in such challenging, high-performance environments as routers and embedded devices.
We can easily expect to see more and more virtualization embedded in appliances and hardware devices: what about an antivirus box able to trace the stack of malware by running it in a virtual box, instead of the usual signature matching?

March 02, 2008

Web 2.0 IDEs

How should we develop for Web 2.0? That's an interesting question: as of today we lack methodologies, testing tools and a proper development environment for the web. That's the truth, once you see through the smoke: while any C developer can start coding and debugging in less than an hour from a clean system, most PHP developers are still stuck with echo and similar "debug tools" from the 70s. If you look at Java things get only slightly better: while you can debug some parts of the code, the ecosystem around J2EE is so crowded it's almost impossible to establish proper methodologies.
But the real nightmare is the frontend. I know CSS/JS gurus who code with Emacs! While Emacs is a very nice operating system, it's unbelievable there's nothing better out there.
The idea for this post came from the recently announced release of the new version of WaveMaker Visual Studio, a "drag and drop" IDE for Ajax-powered websites.
The arena of Web 2.0 IDEs is full of competitors. Mind you, I will only name a few, but feel free to drop me a comment if you know more. I will mix Ajax/client-oriented IDEs with IDEs supporting server-side languages, but that's exactly the point: in the new Web 2.0 we need both! What's more, most IDEs are not just being, well, IDEs: they're pushing their own frameworks with proprietary libraries, different standards and so on.

  • Aptana is one of the best IDEs around, featuring an Ajax-powered web server and supporting AIR too (AIR vs Silverlight, anyone?). Aptana targets PHP and RoR, two of the most popular languages on the internet, but... surprise, no support for PHP debugging, only JavaScript. So even with the advanced Aptana you're cast back to the stone age of echo $debug.

  • Echo2 is a framework/IDE aimed at Ajax and rich-client development. It's obviously Java-based, and provides a nice and easy environment for the developer. Still, I can't help but feel a "black box" look around Echo-based applications.

  • qooxdoo is a complete framework for Ajax: it does not require any knowledge of HTML, CSS or whatever, being a huge juggernaut with its own libraries and a development environment that completely masks the underlying structure. Server-side, it supports PHP, Perl and Java. Did I mention there's no debugging?

  • Morfik WebOS AppsBuilder is another complete framework for Ajax, featuring a visual environment for page building and browser-side debugging via Firebug. And when I say complete, I mean it: Morfik is a full RAD tool, so you are either going to love it or hate it.

  • The Eclipse PDT project is an Eclipse plugin powering the development of PHP code. It's still not very mature, but will eventually support complete debugging (it actually does by now, but it's a little tricky to set up) and it's my IDE of choice, by the way.

  • RDT is a complete Eclipse plugin for Ruby on Rails development. Nothing to say here: it's probably the IDE of choice of most Ruby developers.

  • Zend Studio should be a bigger player. It's Eclipse-based now, supporting unit testing (finally!) and proper debugging. Yet its relatively high price is a huge barrier for buyers: most PHP guys today were coding alone yesterday and could not afford Studio. The result is that they don't need it now, and they probably won't tomorrow. Bad move, Zend.

  • Netbeans has surprisingly good support for Ruby on Rails, including debugging, semantic analysis and so on.

  • 4D's Ajax support is a nice addition to the 4D suite. I must admit I never quite got to know 4D, it being a little too "closed-minded" for me, so I'm just mentioning it here.

But wait: how come we are speaking about Web 2.0 IDEs without mentioning any IDE that is actually 2.0? Well, here you are:
  • Heroku is a feature-full, powerful and scalable IDE for developing Ruby on Rails applications directly on the web. Heroku takes care of everything, from giving you an IDE to actually running the applications in production. That's a tremendous improvement, but still... you will be missing the most advanced features of a full IDE (debugging, call tracking and so on).

  • AppJet is a full-JavaScript solution: write your JavaScript code in their IDE and voilà, it's up and running server-side.

Conclusions: while we have dozens of players and products, not only are we missing the ultimate IDE, but most environments don't even support the most fundamental features programmers have become accustomed to.
Debugging, proper testing and continuous integration are nowhere to be found in the brave new web.
The next time your favourite web application goes mad, you know why.

Update: after a quick test, I've added 4D and Netbeans. Thanks go to freakface and Mickael (even if he is now using gedit).

March 01, 2008

How to use Google Analytics on Soup.io

I'm running a Soup.io blog for personal entertainment: it is a great service for fast, quick blogging and has a great team, but it's missing statistics.
So I've used Google Analytics. Here's how:

  • Create a Google Analytics account

  • Copy the tracking javascript (new version)

  • Edit your soup description

  • Enter html mode

  • Paste the javascript code inside the description, then save.

Here you are: Google Analytics up and running.
I think you should not edit the description again afterwards, but I'm not sure.

Can we eat the apple?

As IT professionals, we are used to love-hate relationships. We invented Perl and LISP, so we know what we're talking about. But it seems Apple is a white cow in a black herd.
In a recent article on his blog (later picked up by Ars Technica), Vladimir Vukicevic revealed he had found undocumented APIs in Apple's frameworks.
While I don't think this is malicious behaviour in itself, think for a moment about Microsoft doing the same thing, and the reactions that would follow.
Instead, the thing went almost unnoticed.
It's hard to hate Apple, or even to be angry with that company: Apple is innovating every day, doing amazing research, and is cool where Microsoft is not. And I haven't even mentioned the iPhone, the iPod and so on.
But then, Apple is cheating: not releasing SDKs and in general acting like it couldn't care less about fair play and the community.
How long before we realize it?

February 26, 2008

Platespin acquired by Novell

Perilli blogged yesterday about Novell acquiring PlateSpin.
According to Perilli, it seems Novell is interested in PlateSpin's Forge, its new disaster recovery solution. I've blogged before about the role of virtualization in disaster recovery techniques, and this is just another fact supporting my thesis. Even before everything goes virtual (which in time will happen), most disaster recovery solutions will be virtualized.

February 25, 2008

Linkedin goes mobile

LinkedIn announced the availability of a mobile version of its popular professional social network.
OK, it's an improvement - now you don't actually need to exchange business cards anymore. No more: "Nice to meet you, Tom, here's my business card".
Finally you can do something like "Nice to meet you Tom","Sure Paul, wait a minute please.. let it connect..." "I'm waiting" "..sure, just a second, here we are, logging in.." "I'm still waiting" "..and now tapping here and there and here I should be finally able to add you to.. where are you going Paul?".
You build a mobile version of a social network and you still don't embed a simple contact exchange protocol between phones? Am I asking too much of this century?

VMware shared folder vulnerability

Here we are again. Core just published an advisory about a directory traversal vulnerability in VMware's implementation of shared folders. That is, users on the Guest can read and write ANY directory on the file system of the Host.
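Directory traversal bugs like this one all share the same shape; here is a sketch in Python (not VMware's actual code, and the shared-folder path is made up):

```python
import os

SHARED_ROOT = "/srv/shared"  # hypothetical host-side shared-folder root

def resolve_naive(name):
    # Vulnerable: trusts the guest-supplied path verbatim.
    return os.path.normpath(os.path.join(SHARED_ROOT, name))

def resolve_safe(name):
    # Canonicalize first, then verify the result is still inside the root.
    path = os.path.realpath(os.path.join(SHARED_ROOT, name))
    if not path.startswith(SHARED_ROOT + os.sep):
        raise ValueError("path escapes the shared folder")
    return path

print(resolve_naive("../../etc/passwd"))  # /etc/passwd -- escaped the root
```

A guest asking for "../../etc/passwd" walks right out of the shared folder in the naive version; the safe version canonicalizes and re-checks the prefix before touching the file system.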

I've blogged a lot in the past about the importance of patching and here we are again.

The "infrastructure" as we knew it is gone: you can't have security if you don't patch the second you can, not a moment later.
And I remember somebody telling the story that the hypervisor and the infrastructure around it were so simple that it was almost impossible for them to have security bugs...

February 24, 2008

IPhone SDK and the importance of community developers

So, the iPhone SDK has been delayed once more. By itself it's not big news: the SDK was announced by Steve in October and it's still not here, so a couple more weeks won't hurt that much.

I think there's room for some thoughts here on the importance of community-powered development. Some years ago, looking at the iPhone, the only thought of any sensible person would have been: great! And that's it. But now - mind you, it's still a "great" before anything else - a lot of people will start wondering: OK, but can I install those nifty little free apps I've grown accustomed to? What else can I do with the device/technology/platform?

Consider the recent news: the open Android SDK, Microsoft's interoperability announcement; even consoles are opening up to community games, something unbelievable only a few months ago. And did I mention the hundreds of Wii hacks around?

Lesson learned: customers, even enterprises, do care about a platform's openness, and the possibility to develop, customize and hack will be more and more important in the future. Apple and Microsoft already got the hint.

February 21, 2008

DRAM like an elephant: breaking disk encryption

FileVault, BitLocker and TrueCrypt are widely used disk encryption technologies: we used to think of them as "rather secure" solutions, since once the computer is turned off the whole disk is encrypted and there is no way (yet) to get the data back.

We have even seen some esoteric devices meant to let you grab a PC without having to turn it off, and thus without triggering the disk encryption, but now the attack is on a whole new level.

It seems researchers at Princeton have successfully retrieved the contents of common DRAMs seconds to minutes after the computer was turned off. And no, it seems Gutmann's effect is not involved at all.
They have built a single-purpose operating system meant to collect data from RAM while looking for disk encryption keys, and have demonstrated they can break the encryption. Actually, once you have access to RAM there are a lot of interesting things to be found, including passwords, usernames and so on.
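The Princeton tool hunts specifically for AES key schedules, but the general idea - key material stands out in memory - can be sketched with a simple entropy scan (a toy illustration, not their code):

```python
import math

def entropy(data):
    """Shannon entropy of a byte string, in bits per byte."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def candidate_key_offsets(dump, window=32, threshold=4.5):
    # Cryptographic keys look like high-entropy 32-byte windows;
    # zeroed or text-filled RAM scores much lower.
    return [i for i in range(0, len(dump) - window, window)
            if entropy(dump[i:i + window]) > threshold]

# Mostly zeroed "memory dump" with one random-looking 32-byte region.
dump = bytes(1024) + bytes(range(32)) + bytes(1024)
print(candidate_key_offsets(dump))  # [1024]
```

On a real dump you would then try each candidate as a key (or match it against the expanded key schedule, as the actual attack does) instead of stopping at the entropy heuristic.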

While the attack is actually very difficult to execute - the attacker would need physical access to the machine seconds after it was turned off, or would have to throw it into a fridge - it is nevertheless very interesting.

More information can be found at the Lest We Remember website.

February 20, 2008

Updating the infrastructure

The infrastructure isn't infrastructure anymore, let's face it. Infrastructure should be something you work within: it's not supposed to change as fast as what it contains. That's what we once had.
Routers should be able to run for years, firewalls should not need continuous patching, switches should be something you buy, deploy and forget.
This is not true anymore, and the sooner we realize it the better.
We have firewalls based on stock operating systems, and they need patching or face hacking. We have manageable switches with 802.1x interoperability, and they need updates too. We have Windows based NAS servers, which - no surprise - need patching.
Some weeks ago, Oglesby and Pianfetti posted an insightful article about VMware patching. I agree with most of the article: hypervisors are not rock-solid items with no need for patching. But, I would add, there is no such thing as "evergoing infrastructure" anymore.

As the boundary between "infrastructure" and "application-level stuff" gets thinner and thinner, and the number of functionalities offered skyrockets, we have to think about patching and updating everything in IT. Take a look at the exploits on Bugtraq concerning switches, firewalls and routers and you'll get the picture.

Long gone is the time when you could afford to deploy things and get on with your work. As complexity grows, we need new approaches to patch management.

Virtualization can do great things on the server side, but network and infrastructure virtualization is still in its infancy (despite the stories vendors tell). Even patch management for servers has a long way to go (even if some interesting tools like VMTS Patch Manager, xVM Ops Center and Update Manager are showing up). Until then, we'll have to rely on vendor-specific infrastructure update tools.

Many layers of infrastructure = many vendors.
1 tool per vendor + many vendors = a big mess.
We badly need a new way to think about updating: what about a centralized tracking tool able to manage, issue warnings and push updates across the whole infrastructure? Does something like that exist? Let me know, or drop me a line if you want to start developing it.
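Just to make the idea concrete, here's the kind of skeleton I have in mind - everything here (vendor names, versions, hooks) is invented for illustration; real implementations would wrap each vendor's own update tool or advisory feed:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Device:
    name: str
    vendor: str
    version: str

# Hypothetical per-vendor hooks: each returns the latest available version.
latest_version: Dict[str, Callable[[], str]] = {
    "cisco": lambda: "12.4",
    "vmware": lambda: "3.5u1",
}

def outdated(inventory: List[Device]) -> List[str]:
    """One consolidated report across every vendor in the infrastructure."""
    return [d.name for d in inventory
            if d.vendor in latest_version and d.version != latest_version[d.vendor]()]

fleet = [Device("edge-router", "cisco", "12.3"),
         Device("esx-host-1", "vmware", "3.5u1")]
print(outdated(fleet))  # ['edge-router']
```

The whole point is the single inventory and the single report: the per-vendor ugliness stays hidden behind the hooks.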

The next time you build an infrastructure, be sure to ask yourself "How am I going to patch and manage this?".

February 12, 2008

Cutting the phishing rod

These days I've been involved in developing some countermeasures against an intensive phishing attack. The IT team of the targeted bank is rather skilled when it comes to security, but was simply helpless against such an attack. It seems there are no defenses, since the attackers are hitting the user, the soft spot. Still, we had to think of some way to at least mitigate the attack.
So here are some ideas for proactive "server side" defenses against phishing (not involving client interaction): I've got a couple more, but I have to develop them a little.

  • Pollute the aggressor's data: we generated thousands of fake credentials from hundreds of proxies. The attackers will have a hard time filtering out the fakes, since they're not even logging the IP the requests came from.

  • Lay traps: if you manage to have the attackers take the bait and use a fake login, you can modify your application and try to mine useful information from their browser. Things like JavaScript local IP grabbing, evil Java applets and such can help against the anonymous proxy they will likely be using.

  • Do content tampering: if the attackers left some images with src attributes pointing to your website, change the links on your website and switch their pictures to something else to warn the user. A huge red alert banner will do. If they were clever enough to download the images too, change yours anyway. It won't be a huge gain, but every little bit helps...

  • Do image proofing: have an artist design 365 small images, one for every day. It is not easy to copy someone's style, and if users are used to seeing something change in a regular fashion, they will spot anomalies easily. The point is: most bank (and big store) websites are static in content, and thus an easy prey for phishers.

The next thing I'm thinking about is automatic detection of phishing websites using a proxy. Imagine a Squid component able to warn you whenever you hit a phishing site... without the need to maintain a blacklist. Blacklists are a quick, easy way out. Unfortunately, they just don't work: most hits take place in the hours right after the phishing attempt, when the blacklist hasn't been updated yet. That's why we need more creative, out-of-the-box solutions.

February 05, 2008

A Linux stack on Solaris: Nexenta

I have to admit I've been somewhat skeptical about the actual usefulness of OpenSolaris. Yes, Solaris is one of the most advanced operating systems on the market, with quite a huge installed base, but it was hard for me to see where OpenSolaris could actually fit in the open source landscape.
Some days ago I found the Nexenta project, and I have to admit I'm impressed. Long story short, Nexenta is a community-driven attempt to build a nice Debian-based environment around the OpenSolaris kernel... and it actually works pretty well.
OpenSolaris is a very interesting product by itself, but until today I could not see how - in a world where application portability is by far more important than, say, a strong kernel - one could do without the entire Linux ecosystem.
You can maybe recompile any Linux application under OS X, but it makes no sense to do so: the same goes for Solaris.
Back to Nexenta, I have to admit that ZFS alone is worth the price of admission, allowing for transactional upgrades in a well-known Debian environment.
Next time you build a Debian (or Ubuntu, that is) based server, you should really consider Nexenta: performance is, according to my quick benchmarks, impressive, and you still get to stay in the familiar and cost-effective apt-get world.
As a side note, Nexenta's commercial product (the Nexenta Storage Appliance) seems very interesting too: it runs on stock hardware with all the power of ZFS and all the usual administration panels.

January 28, 2008

Android and Qtopia

As you might know, Nokia has just acquired Trolltech. You might know Trolltech for that little thing called Qt, powering KDE, Skype and many other apps. What's in it for Nokia?
The easy answer could be Qtopia, Trolltech's framework for mobile phones. Checking the Open Handset Alliance web page, you will notice Nokia is not part of the "alliance". They already own half of Symbian, and now Trolltech. What we will have here is, very likely, strong competition in the open-source phone market. We have Android, Qtopia and - maybe - the little OpenMoko. With Windows Mobile 7 just around the corner, it seems 2008 will be a very interesting year in the smartphone market, and I'm not even speaking about that tightly-closed and developer-unfriendly phone you all know about.
We'll see how Nokia will manage its new platform: beating Google on community support is not an easy task.

January 22, 2008

Oh, and about Apple

I know most of you are thinking about a Mac. No really, maybe not an Air, but definitely a Mac.
Because it's cool, because OS X is the best OS out there.

Well, there might be something you're missing. Something you're badly overlooking. You're moving to the next circle of Hell: you're leaving Microsoft (I really don't think you are leaving Linux) for a company which manages to be even more "evil" than Microsoft.

I'm speaking about Apple patching open-source inspection tools so they cannot operate on Apple's software. Take DTrace and gdb, for instance. They won't work with iTunes. What's more, their patch and protection is easy to bypass, even lame. Why would Apple do that? Maybe ISVs, maybe patents; we can't know for sure.

I'll let Adam Leventhal "speak".

Which started me thinking... did they? Surely not. They wouldn't disable DTrace for certain applications.

But that's exactly what Apple's done with their DTrace implementation. The notion of true systemic tracing was a bit too egalitarian for their classist sensibilities.

It's all about you. Think about where your use of a Mac will take you in a couple of years - remember: the web is the next platform - and that's it.

Web Application Firewalls

Ivan Ristic, the principal author of ModSecurity, has just published an article about application firewalls. His thesis: this is the right year for web application firewalls... like ModSecurity.

I agree with most of his analysis - web application firewalls are really a must nowadays, and web application protection is "the next big thing". And yet, I don't think misuse-based application firewalls are the right answer: they need very high skills to be tuned and configured, and at the moment they don't deliver enough value for the effort required.

While a finely tuned ModSecurity can improve the security of any web application, the problem is that a whitelist approach is often unfeasible in complex environments, and a blacklist is utterly ineffective against tricky or unknown attacks. The usual problems of signature/behaviour-based IDSes.

So what? What we need is the holy grail of intrusion detection: an anomaly based web intrusion detection system. Impossible? Maybe. Necessary? For sure.
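To make the idea concrete, here is a toy sketch of what "anomaly based" means in this context - my own illustration, not ModSecurity's approach or any real product's. It learns the normal length of a request parameter from legitimate traffic, then flags values that deviate too far from that baseline. A real system would model many more features (character distributions, token structure, parameter presence), but the principle is the same: no signatures, just a model of "normal".

```python
import statistics

class AnomalyDetector:
    """Toy anomaly detector: learn the normal length of a request
    parameter from clean traffic, then flag values whose length
    deviates from the mean by more than `threshold` std deviations."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.samples = []

    def train(self, value):
        # Training phase: record lengths seen in legitimate traffic.
        self.samples.append(len(value))

    def is_anomalous(self, value):
        mean = statistics.mean(self.samples)
        std = statistics.pstdev(self.samples) or 1.0  # avoid div by zero
        return abs(len(value) - mean) / std > self.threshold

# Hypothetical "login" parameter values from normal traffic.
det = AnomalyDetector()
for v in ["alice", "bob", "caroline", "dave", "erin"]:
    det.train(v)

print(det.is_anomalous("mallory"))  # an unseen but normal-looking value: False
print(det.is_anomalous("' OR 1=1 UNION SELECT password FROM users --"))  # True
```

Note the appeal of the approach: the SQL injection above is caught without anyone having written a signature for it. The flip side is the classic anomaly-detection problem of false positives on unusual but legitimate input.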

January 19, 2008

Robots lie. What about software ethics?

In a recent article, Discover reports that in an experiment with genetic code and robots, researchers from the University of Lausanne were able to produce robots with the ability to lie. The robots, having to cope with a "food or poison" question, were able to signal poison as food to their "brothers" and then eat the real food while the other robots were poisoned.
While the article is missing technical details and the paper isn't available on Dario Floreano's website, we can easily guess that it's all about the fitness function. A fitness function is (citing Wikipedia) "a particular type of objective function that quantifies the optimality of a solution".

The way we choose to measure fitness decides how an individual in a genetic algorithm behaves. It might be The Selfish Gene, or not: it's up to the fitness function. In the same experiment, the scientists also found heroes, embracing sacrifice to save the other robots.
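A minimal sketch of the point, assuming a made-up scenario (this is my own toy model, not the Lausanne experiment): evolve a single "honesty" gene with a trivial genetic algorithm, and watch how the choice of fitness function alone decides whether liars or honest signallers emerge.

```python
import random

def evolve(fitness, generations=200, pop_size=50):
    """Evolve a single 'signal honestly' gene in [0, 1] with a toy GA:
    truncation selection on the given fitness, plus Gaussian mutation."""
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        # Each survivor produces two mutated offspring, clamped to [0, 1].
        pop = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
               for p in survivors for _ in (0, 1)]
    return sum(pop) / len(pop)  # average honesty of the final population

# Selfish payoff: lying about where the food is keeps it for yourself.
selfish = lambda honesty: 1.0 - honesty
# Group payoff: honest signalling feeds the whole swarm.
altruistic = lambda honesty: honesty

random.seed(0)
print(evolve(selfish))     # converges near 0: a population of liars
print(evolve(altruistic))  # converges near 1: honest signallers
```

Identical algorithm, identical genome: only the fitness function differs, and it alone determines whether deception or honesty evolves.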

Why is this so interesting? Because we are going to see more and more genetic software in the enterprise, especially in decision support systems. What we choose as a fitness function will be reflected in the output: will it fire someone, or hire more women?
Software is going to be less "objective" in the future. Complexity is a factor: nowadays we can't tell "why" a neural network works the way it does - we can observe the output, but we can't really be sure.
We'll have to rethink the way we interact with software and understand it. Maybe we should start thinking about ethics - not the Asimov way, but in a new, business-oriented way?

January 17, 2008

Security by design

TaoSecurity published, a few days ago, an article about Defensible Network Architecture 2.0.
The main idea of the article is to start monitoring, get a deeper understanding of a network or a complex system, and then proceed to secure it. While I think the article itself is very insightful, there's something I must note: there is no Design. That's something happening more and more in the real world: the absence of design. Networks start small, they grow bigger and bigger as the months pass, and no one has a clue about what's happening. I'm not speaking about single hosts, firewall configurations and so on: I'm speaking about the role of IT in the organization.

To control the infrastructure, one could say, you have to understand how it works. I disagree. You need the why before the how.
If you want real governance - and security demands such governance - you don't monitor what's already there and then start thinking about security. You have to ask: what kind of services does my business need? What's really important, and what's not? Only when you have such a knowledge of the purpose of IT in your organization can you start monitoring, inventorying and controlling. Designing!

We need to bring design back: complex infrastructures simply get out of control without proper guidance, and there's no such thing as a "quick solution".

January 12, 2008

WPF in the enterprise

I Started Something has an interesting post about the use of "sexy GUIs" in enterprise software, referring to Lawson's Smart Client: an enterprise application with a cool design, based on Windows Presentation Foundation. While I agree with the idea that a well designed GUI is not only an improvement but a must-have today, I think there's a big mistake here.

The world is moving towards a web-centric environment, leveraging servers and taking advantage of operating-system-independent software. It's not a matter of vendors anymore, it's about technology: one doesn't have to be a Linux advocate to understand that there is no sense in developing a "WindowsWhatever-bound" application... if there is no need to do so.
If you don't have to interact with local hardware, there is no good reason not to use webapps anymore.

And you can have far better graphics with far less work.

January 10, 2008

OpenMeeting: an open-source Breeze clone

The author of this little marvel won't be pleased by the title of this post, but that's what we have here: a fully open-source, Breeze-like conferencing software. After a brief test, I can only say it's impressive.
Based on the Red5 streaming server, it can be easily deployed in an open-source environment: from what I can see, it's nothing short of a fully functional conferencing server.
You can find a live demo and full download of OpenMeeting here.

January 09, 2008

Jook and social music

I've written before about crossing the boundaries between internet social networks and the real world. Jook is another attempt at it.
Imagine social music sharing in the metro, or at a station, complete with broadcasting, profiles and feedback. That's Jook.
Jook itself is a protocol specification (I won't discuss security... for now) to be implemented by hardware vendors: the final product could be a small gadget connected to an iPod, Zune or similar device.
People with Jook can listen to what another user is listening to, provide feedback, access profiles and so on. It is going to be a great tool for small bands, marketers (how long before the first ads, if it manages to reach critical mass?), music lovers and, why not, researchers. Memes never had such a way to spread before.

How long before the first RealLife/SocialNetwork gateway gadget is produced?

GNOME on cellphone

As a follow-up to my previous article on open-source hardware, some interesting news: OpenMoko has just announced the FreeRunner, its new Linux-based phone.
It is aimed at the general market (while the Neo was more developer-oriented) and has very interesting features. Oh, and there's no need for jailbreaking software.
While I am not sure there's still space for a Linux-based device with Android around, having a Linux-powered device is still very interesting.

January 06, 2008

Neuros OSD and Open Hardware

The New York Times has an interesting article about the Neuros OSD. The point here is that the general public is getting more and more interested in the whole "open" world. We have to rethink the way we relate to devices: being part of the open world will be a must within the next couple of years.
Think about it, next time you buy a gadget.

January 04, 2008

Enterprise Social Computing in 2008

The FastForward Blog has just published a nice article about the use of social networks in the enterprise, something I already wrote about in a previous post. While I agree with part of the article - I am expecting SharePoint and its ecosystem to skyrocket in 2008 too - I'm more optimistic about the adoption of social networks.

I think no project can start without the need to solve a business problem, thus social networks will be implemented for a reason.
Cross-selling or team building are good candidates, but it's likely that early adopters will want to target specific business problems, as Armstrong says. I don't think we are going to see any "Facebook for the enterprise" implementation at all.

As for delivering value, it's all about perspective. In my opinion, a social network won't do any good for the efficiency of a process. A social network can, in the long run, give you a significant boost in effectiveness; however, it will not be possible to measure the benefits until months, if not years, have passed.

It's a big bet, but I'm pretty sure companies that are "all about people" (consulting firms, for one) will seriously start thinking about social networks in 2008.