Tuesday, November 9, 2010

Microsoft PDC 2010

Windows Azure is the big topic of this year's Microsoft PDC. I haven't paid much attention to Windows Azure in the past, but after the PDC I actually felt the urge to write some code in the cloud (hilarious, isn't it?).

The developer experience for Windows Azure is what you'd expect with Visual Studio 2010. The transition is easy to make; the big difference is primarily on the Windows Azure Storage side of things. That did not sit well with me, and it took me a while to figure out that the out-of-the-box tools don't really support the development storage, which is what you use if you haven't paid for a Windows Azure account.

There are a lot of pesky things about the Windows Azure Development Storage that aren't entirely obvious. For example, there's only one development storage account (which limits you to exactly one database at any given time), it can be completely wiped simply by clicking a button on the storage service client, and there's no built-in tool for managing the storage (only querying from within Visual Studio is possible). Given that I might wanna switch between different projects, I thought I'd keep my schema and data primarily in scripts. That made sense until I realized that the way to talk to the Windows Azure Storage service is through a REST API. That's fine, but not if you're expecting to write T-SQL scripts. Windows Azure Storage is a NoSQL store, fundamentally different from what you get out-of-the-box with SQL Server. I realize that's a good thing for writing scalable software, but it did catch me off guard. I believe I was expecting something along the lines of what Microsoft is offering with SQL Azure.
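
To make that concrete, here's a minimal sketch of a table insert against the development storage using the StorageClient library from the Azure SDK. The CustomerEntity type and the "Customers" table are made up for illustration; the rest is the stock API as I understand it.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical entity; TableServiceEntity supplies PartitionKey/RowKey.
public class CustomerEntity : TableServiceEntity
{
    public CustomerEntity() { }
    public CustomerEntity(string partitionKey, string rowKey)
        : base(partitionKey, rowKey) { }

    public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        // Points at the local development storage, not a paid account.
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("Customers");

        // The client wraps the REST API in a WCF Data Services context;
        // there's no T-SQL anywhere in this picture.
        TableServiceContext context = tableClient.GetDataServiceContext();
        context.AddObject("Customers", new CustomerEntity("SE", "1") { Name = "Kim" });
        context.SaveChangesWithRetries();
    }
}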

I'm interested in learning more about Windows Azure, but this sort of thing doesn't help with that. Eventually I would put up the money for it (truth be told, it's not that expensive). However, it's hard to get an overview of what you need in terms of computing power, and the final cost of running in the cloud is pretty much guesswork. On the upside, you can scale back to whatever supports your revenue stream, as well as scale up if necessary.

It's likely that my concerns will fade with time. As I learn more about the platform, my ability to make the right decisions will certainly improve. As of today, though, the first-class developer experience I've come to expect from Microsoft is not quite there with Windows Azure.

Sunday, October 31, 2010

The hardest problem with computing is people

Gilad Bracha said this during the Programming Languages Panel at Microsoft PDC 2010.

The hardest problem really is this whole issue of compatibility that plagues software, which is so brittle: you can never take things out, you can never fix things early, you can only add stuff, and thereby contribute to bloat. The problem is not so much with the technology as it is with the people. The hardest problem with computing is people, and, you know, they need to be replaced, but we cannot do that. Ultimately we need to find ways for software to evolve and update itself in such a way that it can not only grow but shrink.

Hopefully I'm not misrepresenting Bracha or quoting him out of context when I say this.

I've been looking into software composition and have found great use for dependency injection. Inversion of control (IoC) represents a step in the right direction, and dependency injection (DI) is the means to achieve both IoC and composition. As a proof of concept, I was refactoring some code written by a co-worker this Friday, moving things into reusable services, and surprisingly, yet satisfyingly, I was removing more code than I was adding. The result is not necessarily a smaller code base, but the complexity is certainly more manageable (on top of being testable). I think this is what Bracha is getting at.

The benefit is that there's no longer a tight coupling between these objects, which is what allows me to move this code into a separate assembly altogether. If I can partition my software into isolated compartments, that's a step in the right direction. What you get is the problem of managing several smaller pieces instead of one very large code base, and through experience I've learned that managing complexity on a smaller scale is preferable to a big ball of twine (intermingled dependencies).
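
A minimal sketch of the shape this refactoring takes (the invoice types are hypothetical; the point is the constructor injection):

public interface IInvoiceFormatter
{
    string Format(decimal amount);
}

public class PlainTextInvoiceFormatter : IInvoiceFormatter
{
    public string Format(decimal amount)
    {
        return amount.ToString("C");
    }
}

public class InvoiceService
{
    private readonly IInvoiceFormatter _formatter;

    // The dependency arrives through the constructor; InvoiceService never
    // news up a concrete formatter, so it can move into a separate assembly
    // without dragging any particular implementation along with it.
    public InvoiceService(IInvoiceFormatter formatter)
    {
        _formatter = formatter;
    }

    public string Print(decimal amount)
    {
        return _formatter.Format(amount);
    }
}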

The compositional approach is of course not free. Nothing is. For one, a loosely coupled, compositional software architecture requires lots of abstractions, and you should be aware that there's a fair amount of work involved in orchestrating the composition of the software itself. Some of that work will go into writing auxiliary software to make deployment and configuration easier.

I'm convinced that composition is the way to tackle software that aims to solve a diverse class of problems in a manner that is consistent at a higher level. It enables great flexibility without sacrificing architecture, testability or maintainability, at the cost of some initial bootstrapping when new requirements surface.

Saturday, October 23, 2010

Windows Phone 7

After attending the Windows Phone 7 launch event here in Stockholm, Sweden, I started thinking about the Windows Phone 7 experience in a new way. Having read everything I could find on the Internet, actually using the phone didn't really change my opinion: it's still a solid touch device. But one thing that became abundantly clear was that the phone cannot be experienced through others.

I say this because so much of the phone experience is tied into social networking, such as Facebook and Twitter. If the phone is hooked up to your contacts, it's personal and talks to you in a different way. The problem is that you can't just buy the phone and get that experience; you basically need to rent it and spend time with it before you can make a decision.

The Windows Phone 7 experience appeals to me differently than the iPhone or Android do, and if you have a chance to check it out, you should!

Monday, July 26, 2010

Moving to the cloud

I've more or less relied on my hosting company to advocate my own existence on the web. With the changes made to Blogger over the years I've been using it, it has pretty much come to the point where everything I need is in one place. All in all, it's really convenient.

I can make changes whenever and wherever I like, and there's no longer any source code to compile. Normally I'd have a small back-end that grabbed my Blogger and Twitter feeds, but as I recall, I've misplaced that source tree at least three times already. With this change, the source code is now up in the cloud. And if you're about to right-click and view source, I want to point out that I'm in no way responsible for the additional markup and scripts generated by Blogger.

Sunday, July 4, 2010

System.HardwareAccelerated

Please support my campaign for adding SIMD extensions (and other hardware-accelerated features) to the Microsoft CLR implementation. Follow this link and vote for the feature on the Microsoft Connect website.

The x87 floating-point math extensions have long been one of the ugliest legacy warts on x86. Stack-based and register-starved, x87 is hard to optimize and needs more instructions and memory accesses than comparable RISC hardware to accomplish the same task. Intel finally fixed this issue with the Pentium 4 by introducing a set of SSE scalar, single- and double-precision floating-point instructions that could completely replace x87, giving programmers access to more and larger registers, a flat register file (as opposed to x87's stack structure), and, of course, floating-point vector formats.

Intel formally deprecated x87 in 2005, and every x86 processor from both Intel and AMD has long supported SSE. For the past few years, x87 support has been included in x86 processors solely for backwards compatibility, so that you can still run old, deprecated, unoptimized code on them. Why then, in 2010, does the CLR emit x87 instructions, and not scalar SSE or, even better, vector SSE?

I lifted this text from Ars Technica, swapping one word: "PhysX" for "CLR". It's totally out of context but totally applicable at the same time. The CLR is an abstraction; it has the power to emit whatever instruction set is best suited to any given platform.
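
To make the complaint concrete, consider a loop like the following (my sketch, not from the Ars Technica piece):

// Adds two float arrays element by element. As I understand it, the
// 2010-era 32-bit CLR JIT emits scalar x87 instructions for this, one
// element at a time; vector SSE could process four floats per instruction.
static void Add(float[] a, float[] b, float[] result)
{
    for (int i = 0; i < result.Length; i++)
    {
        result[i] = a[i] + b[i];
    }
}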

Thursday, June 24, 2010

A Big shout out to VMware

When travelling, pack only the essentials. Well, I just downgraded my MacBook installation from Windows 7 to OS X 10.5 because of all the driver troubles, but without my development tools I don't dare leave home.

Not surprisingly, I turned to VMware Fusion to bring over the stuff I enjoy so much. What's amazing is that I set up a brand new VM running Windows 2008 R2 (which is 64-bit) on my 32-bit OS X installation and then installed Visual Studio 2010, all in just under 2 hours, and it just works!

I can't possibly imagine my MacBook without VMware Fusion, and I'm definitely going to upgrade to the 3.1 release as soon as the trial has played out its role.

Friday, June 11, 2010

Operator Overloading Ad Absurdum

Short post, and nothing more than a link to something someone else wrote, but a good read nonetheless.

http://james-iry.blogspot.com/2009/03/operator-overloading-ad-absurdum.html

Sunday, June 6, 2010

Microsoft, Google and Security

On May 31, 2010, the Financial Times reported that Google was going to phase out internal use of Microsoft Windows due to security concerns. All new employees are given the opportunity to choose between a Linux or OS X workstation. This has spurred some rather lively discussion about the actual reason for ditching Windows, whether it ever had anything to do with security, or, for that matter, whether Windows is secure or not. Ultimately, many of these blog posts, articles and rants ended up spreading misleading information about security, and Windows security in particular.

With Windows being the most widely used OS, the threat against Windows is bigger than against any other OS. Everybody is using it, and everybody is watching. A potential exploit can be highly successful simply because enough users don't know how to protect themselves. Yet I've read articles where people argue that Linux and OS X are better alternatives because they don't get hacked as much. This argument has utterly nothing to do with security; it's an irritating piece of information used to fuel the debate.

To say that Windows is less secure simply because it's targeted a lot is naive, to say the least. What you should be looking at is the success rate of attacks carried out against users running Windows, which has been in steady decline ever since the release of Windows Vista. People also tend to pick on UAC, and in all fairness it is/was annoying as hell. But the truth is that it was designed that way, to pull the majority of software away from running in an elevated mode, a potential security risk.

As an attacker, if you target Windows, that's where you'll most likely find a valuable target, simply because that's where normal people like you and me do our banking and whatnot. What I'm trying to say is that the threat against Windows is real; the threat against Linux and OS X, not so much. I would argue that most people running Linux are more tech-savvy and less likely to make the same mistakes as the average user running Windows, but you can't hold that against Microsoft or Windows, because it's software designed to run software, not software designed to protect you from danger. Windows (or any other OS for that matter) is not intelligent enough to stop you from doing something stupid (creating new vulnerabilities).

In the case of IE6, the exploit used in the attack was carried out against a decade-old, deprecated browser, not Windows itself. If you allow just any software to run on your machine, you're begging for it. What fair chance does a vendor like Microsoft have to protect you from a potential threat if you're not serious about protecting yourself? The truth is that you, the end user, are left with a lot of power, and one mistake can be devastating no matter the design of the OS. Many use this same example and jump to the conclusion that Windows is flawed by design, which is absurd. There are people at Microsoft, both engineers and researchers, who focus their entire day on preventing potential exploits from ever materializing, and they do a lot of good. But they cannot stop you from pressing the red button. They can only make the warning label bigger, and typically what happens is that the user ignores the warning and clicks the button anyway. What diligent system could ever exist to prevent you from running arbitrary programs on your computer? Isn't that the purpose of computers, to run programs? Yet people argue that running programs is less secure by design, but that's a risk we have to take.

Security, real security, has always been about establishing a network of trust. If you don't know the origin of a program, file or document, you can never presume it's safe. You take a risk every time you click something from an unknown source, and the computer has been placed in your hands to do as instructed. Think twice about clicking random things, and keep your software up to date.

An interesting side note is that computer programs, typically viruses, can't install themselves or spread between computers unless there's something to exploit. In most cases that something is you. You opened that attachment and you downloaded that file. Only in the unlikely event of such a serious bug in an already installed program would you be at risk, but then, you trusted that program because you installed it.

Real security asks a lot of you, and you have to be willing to face these challenges if you wanna stay secure. Typically, the best way to protect a system is to educate its users and make them aware of the risks. Windows is not the problem; it hasn't been the problem for many years.

Monday, April 12, 2010

The Managed Extensibility Framework (MEF) More on Part Creation Policies -- Enter the ExportFactory

Last time, I blogged about part creation policies and why it might be a good idea to not say anything about them and leave them at their default, "Any". This time I'm gonna talk about the ExportFactory, which is what you'll wanna use for your object factory needs in the future.

Unfortunately, the ExportFactory didn't make it into the .NET 4.0 RTM build, and that's a darn shame. However, MEF being open source and all allows one to easily rectify the problem.

You can find all the relevant source here. It contains a link to compiled binaries for several .NET Framework versions, as well as source code that you can drop into any existing framework/core assembly you have. I've done this with the .NET 4.0 RC version of MEF without much of a problem. But you'll need to configure your CompositionContainer some more.

var catalog = ... // get a catalog here
var exportFactoryProvider = new Microsoft.ComponentModel.Composition.Hosting.ExportFactoryProvider();
var container = new CompositionContainer(catalog, exportFactoryProvider);
exportFactoryProvider.SourceProvider = container; // a bit odd, but necessary: the provider resolves its exports from the container itself

The ExportFactory is used in much the same way as the Lazy<T,TMetadata> type and will create instances when you ask for them. This conveniently avoids the hassle of manually composing objects created through the common factory pattern. The instances served by the ExportFactory will already have their imports satisfied, so no additional steps are required. MEF makes it fun!

e.g.

[ImportMany]
public ICollection<ExportFactory<IMyObject, IMyObjectMetadata>> Objects { get; private set; }
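
And here's a hedged sketch of how those factories are consumed, reusing the IMyObject import from above. Each CreateExport() call yields a freshly composed instance wrapped in an ExportLifetimeContext<T>:

foreach (var factory in Objects)
{
    using (var lifetime = factory.CreateExport())
    {
        IMyObject instance = lifetime.Value; // imports are already satisfied
        // ... use the instance; disposing the context cleans up NonShared parts
    }
}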

I ran into problems when using constructor injection; if you follow the link above, you'll find out how I worked around that.

Sunday, April 4, 2010

The Managed Extensibility Framework (MEF) Part Creation Policies

I'm gonna kick off a series on MEF, partly for my own benefit, but maybe this will make sense to you as well. However, this won't be a beginner's tutorial.

MEF ships with 3 part creation policies: Any, Shared and NonShared. By default, all exports get CreationPolicy.Any. A lesser-known fact is that this allows the consumer -- the imports -- to specify either CreationPolicy.Shared or CreationPolicy.NonShared.

Both the [Import] and [ImportMany] attributes have a RequiredCreationPolicy property which lets you specify how the imports are satisfied. I tend to prefer to imply the correct behavior on the consumer side and not mandate that the producer think about it. With the default policy being CreationPolicy.Any, a convenient factory pattern simply imports with CreationPolicy.NonShared. And if you're importing a collection, you don't have to worry about the composition failing; if a part doesn't match, its author is doing something wrong, and it's not your problem.
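
A minimal sketch of that consumer-side approach; IWorker and Worker are made-up types, but the attributes are the real System.ComponentModel.Composition API:

public interface IWorker
{
    void Run();
}

// No explicit policy on the export, so it defaults to CreationPolicy.Any
// and leaves the decision to the consumer.
[Export(typeof(IWorker))]
public class Worker : IWorker
{
    public void Run() { }
}

public class Consumer
{
    // Only matches exports whose policy is Any or NonShared, which
    // guarantees this consumer gets its own private instance.
    [Import(RequiredCreationPolicy = CreationPolicy.NonShared)]
    public IWorker Worker { get; set; }
}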

Of course, if you export with one specific policy and import with the other, the import is not satisfied, which can lead to an unexpected exception being thrown.

Thursday, April 1, 2010

Best of MIX2010

I would also recommend Joe Belfiore's presentation CL01 Changing our Game -- an Introduction to Windows Phone 7 Series, along with the other sessions that continue on Windows Phone 7 Series. It's a really interesting piece of technology, beautifully designed and with great potential.

Sunday, February 14, 2010

Difficult != time consuming

There. I said it. Difficult problems are problems that lack a solution. Problems that can be solved, problems that have many existing solutions today, are not difficult. Not all problems are difficult, but they are time consuming, and it would be preferable if they could be solved by someone else. The problem with such a solution is that things change, and when you rely entirely on other people for your solutions, you lose the ability to adapt when your out-of-the-box component or solution can't sustain your domain. That's why you sometimes have to do things yourself, in your own way. Because no one else understands your problems that well.