Unpacking Some Boxes

I was asked to make one final post here when RSS feeds were exposed on the new site. Well, the feeds were always there, just not readily visible on the page. If you were using a feed-aware browser, it would have detected two feeds on the site. However, that’s not really all that friendly. Not having the usual feed icons on the page was one of those “unpacked boxes” that I talked about. Well, I’ve updated the site design a little, and now those pesky feed icons are prominently displayed.

This time, this really should be my last post on this blog.

Moving Again

Yes, I know, it’s not been that long since I moved from Windows Live Spaces to here, and there’s not a lot of content here. However, I’m still moving again. This time I’m going to self host so that I have complete control over the site. So, if you’re following my rantings, follow me on over to http://www.digitaltapestry.net.

Greener pastures…

don’t exist.

I just read a rant by Davy Brion that made me feel like I had to post my own thoughts on the subject. Davy did preface his post with an explanation that it was a rant, which I appreciate, and it does mean I’ll cut him a lot of slack here. After all, I often rant with the best of them, and in some ways this very post is my own rant. However, there are enough fallacies in his rant that I just can’t let it go.

One of the most important goals of every piece of guidance and tooling that they provide is accessibility. Lower-end developers should be able to use their products and their guidance and be able to build software of an acceptable quality.

Davy must never look at the Patterns &amp; Practices guidance, then. That guidance is certainly not accessible to “lower-end” developers. Microsoft is often criticized for ignoring the “higher-end” developers, but that’s simply not true; they just don’t always target them. Having worked in plenty of other development communities, I can say that, for good or ill, that’s true everywhere.

Please note that I’m not trying to give Microsoft a complete pass here. Hidden within Davy’s larger complaint is a smaller one that has some truth to it: Microsoft does sometimes provide poor guidance, no matter what level of developer it was targeted at. However, I think more often than not the community makes a mountain out of a molehill here. Often the “guidance” isn’t guidance at all, but simply code of questionable quality written by a Microsoft employee, or sample code meant to illustrate a very narrow and specific point which would be lost if enterprise-level best practices were followed, or any number of other scenarios. Then there are areas where there’s simply debate as to what “best practices” really are; it’s fine to disagree there, but it’s not black and white whether the guidance provided is “bad”.

There is a huge difference in quality between the higher-end .NET developers, and the lower-end.

Absolutely! But what’s being implied here is that the differences are not so great in other developer communities. Having worked in a very diverse set of communities, from mainframe developers to Windows developers to Unix developers, from .NET to Java to C++ to Ruby to Python to PHP to ECMAScript, I can tell you that this is simply not the case. The difference in quality between lower-end and higher-end developers in all of these communities is huge, and the lower-end developers far outnumber the higher-end developers across the board. Sorry, greener pastures don’t exist.

I found it extremely telling that Microsoft is capable of putting resources on products like WebMatrix and LightSwitch (both of which are targeting the very-very-lower-end developers, or even non-developers) while at the same time, they are severely cutting back the resources for projects like IronRuby, IronPython and the DLR (which drew more interest from the higher-end developers than the lower-end developers).

I share some of Davy’s sentiments here. However, I know that this is irrational. First, WebMatrix and LightSwitch aren’t really bad products. They target a far different audience than the one most developers, much less higher-end developers, fall into. However, that audience is real, and has always existed. I share the frustration… I’ve had to “maintain” and “rewrite”, using proper development tools and methodologies, more than my fair share of programs written using these types of tools, and it’s frustrating and painful. However, I’m experienced enough at this point to acknowledge that these “small, quick, dirty” applications written by non-developers or lower-end developers using RAD tooling provide real business value and meet a real need.

The apparent cutting of IronRuby support and development funding really annoys me, and I think it’s a terrible mistake. However, I highly doubt there’s any relationship between this announcement and the WebMatrix and LightSwitch announcements. It’s just coincidental timing, not an indication that Microsoft is shifting its emphasis towards lower-end developers.

My rant (and please note this is not directed at Davy): I’m growing sick and tired of the negative tone coming from many in the .NET community, especially those in the blogosphere who consider themselves “top tier” .NET developers. To listen to them, the .NET community is entirely populated by idiots, who are idiots because Microsoft makes them that way, and who will never learn because Microsoft doesn’t want them to learn. Further, according to them, Microsoft tools are always the worst possible tools you could choose to use. With utter conviction, they tell you that this is a “.NET/Microsoft problem”, and that all other development communities are so much better. Well, I’ve worked with and in those other communities, and if you really believe that’s true, do the .NET community a favor and try those other pastures. Like the cow from the fable, you’ll find the grass isn’t any greener over there, and maybe you’ll learn and grow from the experience and come back to help make our community better.

I ranted a while back about how Visual Studio 2010 took one step forward and three steps back with regards to the Add Reference dialog. Well, Microsoft has released a Productivity Power Tool for Visual Studio 2010 that gets this almost perfect!

The dialog has received a complete makeover.

Now the Assemblies information is cached. The first time I opened the dialog, I was presented with a progress bar as the cache was built. Subsequent uses hit the cache, making the dialog nearly instantaneous to display… even after shutting the IDE down and bringing it back up! Note, also, that there’s now a search option! That’s pretty much everything we were all asking for.

The only thing I’m curious about is how you refresh the cache. There’s no refresh button, and F5 doesn’t appear to do anything. We’ll have to see over time how this works out, but I’m very impressed.

The Productivity Power Tool is worth installing for this feature alone, but it contains several other features as well:

- The “Tab Well UI” gets a makeover, with lots of new features. I don’t think anyone can complain about how the tabs work after this puppy is installed, as you can configure them to work in just about any way you can imagine.
- HTML Copy will make many blog-writing plug-ins obsolete, letting you paste an HTML representation of code copied from the editor.
- The editor can highlight the “current line”.
- A line can be selected with a triple click.
- Mixed tabs and spaces can be quickly corrected.
- Ctrl+Click takes you directly to the definition.
- Ctrl+Alt+] aligns your assignments.
- IntelliSense gets a color-coded syntax makeover.
- Alt+Up and Alt+Down move code up and down.
- Finally, guidelines get first-class UI support.

Many of these features have been available in separate extensions; I’ve been using most of them and couldn’t live without them.

What is MVVM?

It’s a flying bat: ^^vv^^.

Sorry, I know that’s corny, but I still have to rail a bit against the name.

Anyway, Model-View-ViewModel, or MVVM for the TLA (FLAB?) loving folks, or just ViewModel for those that don’t like tongue twisters but want to pay homage to the “original” name, or Presentation Model for those Java folks, or even MVPoo for Drs. and folks fed up with the various names, is a UI design pattern. The pattern separates the display (View) from the presentation state and logic (ViewModel) and the business data and logic (Model). That’s it. Nothing more, nothing less.
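To make that concrete, here’s a minimal sketch of the separation (the Person names are invented purely for illustration; in WPF, the View would typically be XAML bound to the ViewModel):

    using System.ComponentModel;

    // Model: business data and logic. Knows nothing about the UI.
    public class Person
    {
        public string Name { get; set; }
    }

    // ViewModel: presentation state and logic. Wraps the Model and raises
    // change notifications so the View can observe it via data binding.
    public class PersonViewModel : INotifyPropertyChanged
    {
        private readonly Person _model;

        public PersonViewModel(Person model) { _model = model; }

        public string Name
        {
            get { return _model.Name; }
            set
            {
                _model.Name = value;
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("Name"));
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;
    }

    // View: display only. In WPF this would typically be XAML, e.g.
    //   <TextBox Text="{Binding Name}" />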

So, why am I blogging about this if that’s all I have to say about what MVVM is? Because lately I’ve been reading a lot of claims about MVVM that lose track of what MVVM is, making claims that are not strictly accurate.

Let me start with one that came up in the WPF Disciples group recently. Justin Angel posted a review of Josh Smith’s Advanced MVVM book. In his review he talked about how MVVM was “essentially about these 3 things:”

1) Data Binding

2) Commands

3) VisualStateManager

See, that one rankled several folks. When MVVM was “discovered” by John Gossman, the VisualStateManager didn’t even exist. In those early days we also had problems bending Commands to MVVM usage, as this was before RelayCommand, DelegateCommand, et al. Heck, even Data Binding isn’t an essential part: Martin Fowler lamented having to write boring and repetitive code to synchronize state, and even said he hoped that some day something like .NET data binding could be used. See, all of these things are implementation details, and patterns are not about implementation details. They’re about, wait for it, the pattern (see, it’s not just a clever name).
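For anyone who hasn’t run across one, a RelayCommand is only a few lines of code. Here’s a sketch of my own, simplified from the various published implementations:

    using System;
    using System.Windows.Input;

    // A minimal RelayCommand: lets a ViewModel expose a method as an
    // ICommand without referencing any particular View.
    public class RelayCommand : ICommand
    {
        private readonly Action<object> _execute;
        private readonly Predicate<object> _canExecute;

        public RelayCommand(Action<object> execute, Predicate<object> canExecute)
        {
            if (execute == null) throw new ArgumentNullException("execute");
            _execute = execute;
            _canExecute = canExecute;
        }

        public bool CanExecute(object parameter)
        {
            return _canExecute == null || _canExecute(parameter);
        }

        public void Execute(object parameter)
        {
            _execute(parameter);
        }

        // Defer to WPF's CommandManager so CanExecute is requeried
        // automatically as the UI state changes.
        public event EventHandler CanExecuteChanged
        {
            add { CommandManager.RequerySuggested += value; }
            remove { CommandManager.RequerySuggested -= value; }
        }
    }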

Another one that came up recently on the Disciples list was a discussion around a blog post someone had run across. It was a short post, but it generated a lot of discussion about various things. I won’t go into the details, because it was a private discussion. The blog post made two mistakes that are relevant here, though, so I’ll discuss them. The first was a similar mistake: trying to tie MVVM to implementation details like Commands and Dependency Injection. I’ve already pointed out how that’s wrong, so let’s move on to the second mistake. He stated that when following MVVM the goal is to have no code in the codebehind. This just isn’t true. Having a clean codebehind is a goal I have, since it helps with the developer/designer separation, and it’s a goal made possible by MVVM (and other patterns and techniques), but it isn’t a goal of MVVM. You can, and I sometimes do, put code in the codebehind while following a strict MVVM pattern.

Then there’s another topic that also came up on the Disciples list that’s a particular bee in my bonnet. Several folks tried to claim that in MVVM the ViewModel should never have a reference to the View. That’s just hogwash. Nothing in the pattern states this. What is true is that you should avoid creating a hard coupling between the ViewModel and the View, but that’s just good software architecture and has nothing at all to do with the MVVM pattern. What’s funny is that these same folks have no issue with creating hard coupling between the ViewModel and the Model, which causes exactly the same issues as hard coupling between the ViewModel and the View, and so is just as ill advised. But there’s not a darn thing wrong with a weaker coupling, and there are lots of reasons why you might want one. For instance, imagine your loose coupling entailed an IView interface that exposed Show and Hide/Close methods to allow the ViewModel to control the visibility and lifetime of the View. Do you see anything wrong with this sort of coupling? I sure don’t. More importantly, do you see anything in the pattern description that discusses this? I sure don’t.
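To make the example concrete, the coupling I’m describing amounts to nothing more than this (the names are invented for illustration):

    // The ViewModel sees only this abstraction, never a concrete View.
    public interface IView
    {
        void Show();
        void Close();
    }

    public class DocumentViewModel
    {
        private readonly IView _view;

        public DocumentViewModel(IView view)
        {
            _view = view;
        }

        // The ViewModel controls the View's lifetime without ever being
        // coupled to a concrete window or control.
        public void Dismiss()
        {
            _view.Close();
        }
    }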

Another claim made several times by many people is that MVVM is a refinement of Presentation Model using WPF concepts like Data Binding and Commands (this argument usually being made to justify keeping the longer and more awkward name of the pattern). See, this is just a variation on the other mistakes I’ve discussed. If this statement were strictly true, then MVVM wouldn’t be a pattern, it would be a framework based on the Presentation Model pattern. Remember, patterns aren’t about how things are implemented, they are strictly about the, ahem, pattern.

The Add Reference dialog has been the biggest punching bag for Visual Studio detractors. It’s always been extremely slow and annoying, and pretty much every developer has felt the dread associated with knowing you’re going to have to add a reference to a .NET project.

So, with Visual Studio 2010, this was supposed to have been one of the areas of focus. My initial reaction was that the tweaks made here made a huge difference, and actually made the dialog usable.

Well, I have to say, my initial reaction was incredibly wrong. Yeah, the changes made a huge difference, but not for the better.

The dialog now displays nearly instantaneously. If you’re adding a reference to a project, or browsing to an assembly on disk, the changes will make your heart sing. However, if you’re attempting to add a reference from the list of known assemblies, well, the old dread holds nothing to the new feeling of pure doom.

See, by loading the list asynchronously and populating the UI with every item as it’s found, they’ve actually made the list take longer to load. Worse, because the UI is updated with every item found, it’s not really usable until the entire list is loaded. There’s nothing more frustrating than being presented with a long list of items, inviting you to scroll to locate the one you need, only to have the scroll bar jump around like a Mexican jumping bean as new items are constantly added. This behavior is especially vexing because you don’t know when the list is finally fully loaded, and since it takes longer to load than it used to, you’re always frustrated by the jerky UI.

This is so frustrating, because it’s a problem that’s really easy to solve. Loading asynchronously is the right choice… but it could be done as soon as the IDE is opened, with the results cached. The list should only be populated once all of the items have been added to the cache. For the corner case where the set of available assemblies actually changed, the user could refresh the cache. Problem solved: the user would rarely have to wait for this stupid dialog, and would never be annoyed by an unusable UI.
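Here’s a rough sketch of the idea, using the Task type new in .NET 4 (the class and member names are just mine for illustration, and the scan itself is a placeholder):

    using System.Collections.Generic;
    using System.Threading.Tasks;

    // Sketch: kick the expensive scan off once, as soon as the IDE starts,
    // and let the dialog consume only the completed, cached result.
    public class AssemblyListCache
    {
        private Task<List<string>> _cache;

        public void BeginLoad()
        {
            _cache = Task.Factory.StartNew(() => ScanKnownAssemblies());
        }

        // The dialog calls this; it blocks only if the scan hasn't finished,
        // and it only ever sees the complete list, so no jumping scroll bars.
        public List<string> GetAssemblies()
        {
            return _cache.Result;
        }

        // For the corner case where the available assemblies change.
        public void Refresh()
        {
            BeginLoad();
        }

        private List<string> ScanKnownAssemblies()
        {
            // Placeholder for the slow part: enumerating the registered
            // assembly folders.
            return new List<string>();
        }
    }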

Boy, I hope they fix this before shipping. 😦

I’ve been hard at work on a new version of Onyx. I’ll write another post later explaining why I’m “starting over” and what the new goals are, as that’s not relevant to this post. Since I am starting over, though, it’s given me a good opportunity to switch to using .NET 4 and the new technologies available with it. One of the new features I’m most excited to use is the Contract system, which brings Design by Contract (DbC) to the .NET world.

DbC was first implemented in the Eiffel programming language, and was discussed at length in the book Object Oriented Software Construction by Bertrand Meyer. DbC is a design and development methodology intended to help ensure software correctness. I’ve been intrigued by DbC ever since reading about it in OOSC about 10 years ago, but since the idea really requires language support, and so few languages include that support (Eiffel being the only one I’ve even written a “hello world” program in), I’ve never really had the opportunity to “use it in anger”. Now that DbC is available in .NET, I can finally get some real world experience with it.

So, what is DbC?

DbC is a way to design, build, document and verify software based on the idea of contracts. In the legal world, a contract specifies what two parties agree to exchange (goods or services). In DbC, contracts are used the same way: the consumer of a routine agrees to provide certain input, while the routine agrees to provide certain output. The more strongly both sides of this contract are specified, the more robust and reusable the code will be. When the contract is codified in the software, it’s possible to use tools to statically verify the correctness of the code, to produce unit tests, and to provide runtime exceptions when the contract is violated. When the contract is well documented, it’s easier to write code that consumes it. All of this leads to higher quality code.

In DbC, contracts are codified using special assertions in a declarative fashion. Methods are decorated with two types of assertions: preconditions and postconditions. The preconditions specify what a caller is required to do, while the postconditions specify what the method will ensure at the end of the call. These assertions are specified using predicates. In .NET 4 these assertions are specified using calls to methods on the static Contract class. The Requires overloads are used to specify the preconditions, and the Ensures overloads are used to specify the postconditions.
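Here’s a quick sketch of what this looks like (my own invented example, not code from Onyx):

    using System.Diagnostics.Contracts;

    public class Account
    {
        private decimal _balance;

        public decimal Withdraw(decimal amount)
        {
            // Preconditions: what the caller agrees to provide.
            Contract.Requires(amount > 0);
            Contract.Requires(amount <= _balance);

            // Postcondition: what this method agrees to ensure.
            // Contract.Result<T>() refers to the return value.
            Contract.Ensures(Contract.Result<decimal>() >= 0);

            _balance -= amount;
            return _balance;
        }
    }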

Now’s a good time for an aside. I made two claims earlier about DbC that seem to be contradicted by what I just said about how you specify assertions in .NET 4. First, I said that assertions are declarative, while the use of the Contract class appears to be imperative. Second, I said that DbC requires language support, while the Contract class appears to require only library support. I didn’t really get either of these claims wrong, it’s just that Microsoft found a very unusual way to support DbC in .NET. They use an IL rewriter to modify the binary output after it has been compiled. This allowed them to introduce DbC to all .NET languages, even those that don’t have built in support for the concept, while retaining backwards compatibility.

Because the assertions really are declarative, despite the novel implementation provided in .NET 4, there are some restrictions on how you’re allowed to write them. They must be placed at the beginning of the method, before any other code, and in a fixed order: Requires calls first, then Ensures calls (as in the sketch above).

OK, that covers the declarative nature, but what about the claim that language support is required? Why would language support be necessary? We’ve been using library based assertions for a very long time in nearly every language imaginable, after all.

Well, to understand this claim, you need to think about inheritance. Type inheritance obviously requires language support, and contract inheritance isn’t any different. The assertions for a base class method apply equally to overrides in a derived class, and that’s nearly impossible to achieve without language support. Well, the IL rewriter provides this same support without changing the language. Neat, huh?

We should discuss contract inheritance a bit. See, overrides may not have exactly the same contract as the base. However, the contract needs to be “compatible”. Thinking hard about this, we can come up with some rules. Let’s first think about the preconditions. If the preconditions are identical, we obviously have no problems. What if we add to them (making them stronger), though? This won’t work. Thanks to polymorphism, a derived instance can be passed to a method expecting a base instance, and that method could violate the stronger requirements because it knows nothing about them. What if we remove some requirements (making them weaker) instead? This is fine, because any caller that obeys the base requirements will automatically meet the derived class’s weaker requirements.

Now, let’s think about the postconditions. What happens if we add new conditions (making them stronger)? This is fine, because we’ve still met the base guarantees. What about removing conditions (making them weaker)? This doesn’t work, because we may no longer satisfy the base guarantees.

So, in theory at least, we can weaken the preconditions, and strengthen the postconditions. However, in .NET 4 they’ve only allowed us to strengthen the postconditions. The justification they give is that “in practice” there’s rarely a need to weaken the preconditions, and supporting this concept is too complicated. I’m not sure if this justification is valid or not, as I’ve little experience with DbC, but that’s what we have.
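In code, contract inheritance looks something like this (a contrived sketch, with invented names):

    using System.Diagnostics.Contracts;

    public class Validator
    {
        public virtual string Normalize(string input)
        {
            // Base contract: callers must pass a non-null string, and the
            // result is never null.
            Contract.Requires(input != null);
            Contract.Ensures(Contract.Result<string>() != null);
            return input.Trim();
        }
    }

    public class UpperCaseValidator : Validator
    {
        public override string Normalize(string input)
        {
            // The base Requires and Ensures are inherited automatically by
            // the rewriter. An override may add (strengthen) postconditions,
            // like this one, but may not add new preconditions.
            Contract.Ensures(
                Contract.Result<string>() ==
                Contract.Result<string>().ToUpperInvariant());
            return input.Trim().ToUpperInvariant();
        }
    }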

So, now we have preconditions and postconditions. There’s a third type of assertion that’s very important: object invariants. Object invariants are assertions about the state of the object between calls to methods on it; they must hold before and after every call to an instance method on the object. In .NET 4, you specify object invariants through calls to the Invariant method on the Contract class, placed in a method decorated with the ContractInvariantMethodAttribute.
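For example (again, just a sketch of my own):

    using System.Diagnostics.Contracts;

    public class Counter
    {
        private int _count;

        // The rewriter checks these assertions at the end of every public
        // method, so the object can never be observed in a bad state.
        [ContractInvariantMethod]
        private void ObjectInvariant()
        {
            Contract.Invariant(_count >= 0);
        }

        public void Increment()
        {
            // Contract.OldValue() lets a postcondition refer to a value as
            // it was on entry to the method.
            Contract.Ensures(_count == Contract.OldValue(_count) + 1);
            _count++;
        }
    }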

So, that’s a lot of theory. Next time, we’ll look at the practice of using DbC in .NET 4.

New Toy

The family just took a vacation, spending an entire week in an indoor water park and resort. Specifically, we went to Kalahari in Sandusky, OH. We had a great time, though I don’t think I’ll go down any more “toilet bowl” slides.

Anyway, before leaving for this trip, my wife decided we needed a laptop. We own an aging Toshiba (don’t recall the model), but it’s painful to use. It’s that old. See, as much as I’m into this tech stuff, we’re kind of tight with our money. In this case, though, the wife really wanted to stay connected while we were gone on this extended vacation, so as long as we watched the budget, it was time to purchase a new laptop.

Well, we found a deal. Got a Toshiba Satellite U500 at a really great price. I’ve always liked the Toshiba line of laptops (our first one actually got dropped down a flight of stairs, while running, and suffered no damage), so it was pure luck that while bargain shopping with a single day to make a purchase, the one we found was a Toshiba. Even better, this one has a multi-touch screen. I must say, after using this one, I don’t think I’ll ever own a laptop without a multi-touch screen again. I’ve never plugged a mouse into this one, and I’m quite happy about that!

The U500 is a good size. Having bad eyesight, I didn’t think I’d like a screen this small, but honestly, it’s just about perfect. It’s got a rugged design, with a quality feel. It’s also packed with features. I’ve really been enjoying using this thing, even if I do still prefer the desktop experience (may have to look for a multi-touch screen for my desktop now).

I’ve even managed to get some Onyx vNext coding done on this thing, which is something I never could have done with the old laptop. 🙂

OK, this is too cool.

My current pet projects, Specificity and Onyx, both require desktop and Silverlight support (well, Specificity is getting this support soon). The typical way of accomplishing this is through project links. You create two projects, one for the desktop and one for Silverlight. You create the physical source files in one of the projects, then in the other you create project links to those files. A project link is like a symbolic link, in file system vernacular, but done strictly within a project definition. If you need the code to differ between the platforms due to differences in the platform libraries, you either use conditional compilation via #if/#endif blocks, or you use a partial class with platform-specific code in a file not shared with the other project.
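As a trivial sketch of the conditional compilation approach (the SILVERLIGHT symbol is defined by default in Silverlight projects):

    public static class Platform
    {
        public static string Description()
        {
    #if SILVERLIGHT
            // Only compiled into the Silverlight project.
            return "Silverlight";
    #else
            // Only compiled into the desktop project.
            return "Desktop CLR";
    #endif
        }
    }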

All of that works fairly nicely, but it’s a bit of a maintenance PITA and can be rather error prone. Forget to create the link, and it’s possible everything will still compile, but you’ve got a “bug”.

Well, I just found out that there’s a tool that automates all of this within Visual Studio, provided to us by the patterns & practices team. It’s called the Project Linker: Synchronization Tool. Having just found it, I’ve not yet been able to use the tool “in anger”, but you can be sure I’ll be checking it out real soon.

By the way, if you’ve followed me here from my previous blog on Spaces, welcome to my new home. Hopefully I’ll no longer have to worry about blog spam here! However, being new to this blog, you’ll have to excuse me for a while as I settle in and get things working the way I like.

Update: Near as I can tell, this is where you grab the tool from: http://www.microsoft.com/downloads/details.aspx?FamilyID=387c7a59-b217-4318-ad1b-cbc2ea453f40&displaylang=en#filelist.

I’ve Moved

OK, after the rant in my last post, I’ve done it. I’m not going to be posting to this blog anymore. I’m moving over to WordPress at https://digitaltapestry.wordpress.com.