Thursday, June 29, 2006

Emitting Code, Mind Your Language

{

Caught sight of a post over at Jeff Atwood's Coding Horror blog in which he demonstrates some of the clunkiness of using object libraries while coding. His example is as follows:

Let's say you wanted to generate and render this XML fragment:

<status code="1"><data><usergroup id="usr" /></data></status>

He then shows the 17 or so lines of code it takes to do it with System.Xml objects.
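Roughly what that object-model version looks like - my reconstruction from memory, not his exact code (myCode and myUserGroup stand in for the values being rendered):

// using System.Xml;
XmlDocument doc = new XmlDocument();

XmlElement status = doc.CreateElement("status");
status.SetAttribute("code", myCode.ToString());
doc.AppendChild(status);

XmlElement data = doc.CreateElement("data");
status.AppendChild(data);

XmlElement usergroup = doc.CreateElement("usergroup");
usergroup.SetAttribute("id", myUserGroup);
data.AppendChild(usergroup);

// Response.Output is a TextWriter, so the document can render straight to it
doc.Save(Response.Output);

The alternative he shows uses Response.Write and a String formatter: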

string s = @"<status code=""><data><usergroup id=""></data>";
Response.Write(String.Format(s, myCode, myUserGroup));

What he was getting at seemed fairly intuitive, but a firestorm erupted over his berating of those who consider the use of Response.Write and other terser approaches, instead of object models, to be second class - "dreadful citizenry."

Of course the big pitfall of his example is that it's easily broken if the variables used in a Response.Write approach to generating XML contain unescaped or "bad" characters. The object proponents jumped on this and hammered at how stuff like this works for a "trivial" example but leaves an awful exposure in "real world" applications. I'm always annoyed by that word "trivial" because those who use it tend to abuse it for the sake of complexity and overengineering. On the other hand, if you look at my previous post, I'm no stranger to bad characters and encoding issues (trust me, the pain has been dealt!). But I think what this discussion points to is something near and dear to my heart: language itself.
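To be fair to the terse camp, the escaping pitfall isn't fatal: escape the values before formatting them in and the "bad characters" objection mostly disappears. A minimal sketch, using SecurityElement.Escape (one built-in way to do it; there are others):

// using System.Security;
string s = @"<status code=""{0}""><data><usergroup id=""{1}"" /></data></status>";
Response.Write(String.Format(s,
    SecurityElement.Escape(myCode.ToString()),
    SecurityElement.Escape(myUserGroup)));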

It didn't take long for a Ruby coder to point out that generating something like this in Ruby, using the libraries was as simple as:

builder = Builder::XmlMarkup.new(:target=>STDOUT, :indent=>2)
builder.person { |b| b.name("Jim"); b.phone("555-1234") }

Which generates:

<person>
  <name>Jim</name>
  <phone>555-1234</phone>
</person>

And along came a Perl programmer who shows the original example as:

$x = XML('status', [code => 1],
         ['data', [],
           ['usergroup', [id => "usr"]]]);

I share Jeff's frustration with the heavily object-oriented approach of the traditional .NET libraries, but I think it points to two things. First, the libraries we get are sometimes a clunky "one size fits all" offering that leaves the developer writing a lot of line noise for a simple result. Second, while the language itself may not be the initial culprit (the two shorter examples above could be duplicated in C#, for example), the thinking one acquires from a language, its libraries, and its best practices is why System.Xml makes some developers believe that 17 lines of code is the best way to handle a "trivial" bit of XML generation.

This is why language is so interesting to me: the way you think is overwhelmingly influenced by how you express yourself.

Where does that discussion leave me? Wanting to know more about Ruby and work on my fluency in Perl. I may have to write that C#, but I want more flexibility in thinking.

}

Wednesday, June 28, 2006

Querystring Drama

{

Apparently it's old news - a security feature from ASP.NET 1.1 that protects the web server from "dangerous" querystrings by throwing exceptions for illegal characters like angle brackets and so on.

We happened upon it when trying to pass information from a VB6 application to a web app by putting together a querystring and appending it to the target page like so:

http://foo/bar.aspx?id=29oeiu&0209&wweoi9239

The garbled bit after the ? was assembled by doing a little shifting of character codes. It doesn't really qualify as encryption, but it seems daunting enough for the casual user, and there's really not enough text for a persistent person to find much of a pattern. Unfortunately, as you happily shift those ASCII codes you run into "<" and other characters that ASP.NET 1.1+ is not happy about. This makes the case for doing proper encoding, but even true URL encoding (a "%" followed by the character's byte value as two hex digits) will not absolve the developer of the application server's squeamishness. For example, the URL-encoded value for the left angle bracket, "<", is %3c; for the right angle bracket, ">", it is %3e. So if you think you might get away with a URL that should be http://foo/bar.aspx?start=<&tag=h1&end=> by formatting it as http://foo/bar.aspx?start=%3c&tag=h1&end=%3e, you will be sorely disappointed. Sorely.
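Roughly the kind of thing we were doing - a hypothetical sketch with a fixed offset (our actual scheme was a bit different, and the assembling side was VB6; this is C# for illustration):

// using System.Text;
// Hypothetical helper: shift every character of the querystring by a fixed offset.
static string Shift(string input, int offset)
{
    StringBuilder sb = new StringBuilder(input.Length);
    foreach (char c in input)
    {
        sb.Append((char)(c + offset));
    }
    return sb.ToString();
}

// Shift("id=29", 3) returns "lg@5<" - and there's the "<" that gets you in trouble.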

My colleague and I then thought we'd be a little clever - okay it was my [bad] idea - and just use a constant character for one of the angle brackets but this also fell short since it's not just angle brackets that are considered dangerous.

So what was the fix? Step one is to leverage the might of Google and search on the error message. That led us to an attribute of the page directive that turns this feature off - it was almost too easy:

ValidateRequest="false"

Of course this puts the burden of securing the querystring on the developer, but there are a few perks: first, when placed as a page directive it applies only to that single page; second, the exposure is confined squarely to the page where the developer is using the querystring for back-end command functionality. That's easy to narrow into some clean validation logic, as sketched below.
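Something as simple as a whitelist over the decoded value would do - a hypothetical sketch, not our production code:

// Hypothetical whitelist check for the decoded command string.
static bool IsSafeCommand(string decoded)
{
    if (decoded == null || decoded.Length == 0 || decoded.Length > 64)
        return false;

    foreach (char c in decoded)
    {
        // Allow only the characters our command format actually uses.
        if (!char.IsLetterOrDigit(c) && c != '=' && c != '&')
            return false;
    }
    return true;
}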

}

Tuesday, June 27, 2006

Starting Ruby

{

John Lam's RubyCLR project piqued my interest even before I saw him at TechEd. Today I took my first steps toward getting started:

1. Played a bit with a free online Ruby interpreter.
2. Got the single click installer for my own machine.
3. Read a bit of documentation.
4. Then discovered Komodo, the IDE I originally bought for Perl stuff supports Ruby and has an interpreter. Very, very cool.

Next step: language fundamentals.

}

Thursday, June 22, 2006

Mac Twit

{

Seth Stevenson at Slate on Apple's new ad campaign writes:

"Mr. Mac comes off as a smug little twit, who just happens to carry around a newspaper that has a great review of himself inside."
}

Safari Redesigned

{

I'm a rabid Safari user. I think they built their whole business plan for me personally... okay, that might be a bit arrogant but at least I know I'm not alone in my rampant desire for technical books and my need to constantly keep an updated library for the problems/platforms at hand.

Initial thoughts:

1. They've done a great job segmenting the pages to load in pieces, so browsing through books is much faster. There seems to be heavy use of Ajax in that process.
2. Search is much, much better. Autocomplete when you enter search terms, faster response time, and a slider control that lets you sort by popularity or by relevancy. Very, very cool.
3. The left navigation (table of contents, search results) can be expanded and collapsed.
4. Printing is better - instead of a popup, it's a link to a page with just the content. The javascript to automate printing is still there, but it no longer closes the browser window when complete, which is nice.

I'm sure there are other improvements but getting search and printing to work better for a site offering online books is key.

Just put Ruby in a Nutshell on my bookshelf, preparation for RubyCLR.

}

Wednesday, June 21, 2006

Virtual Earth

{

I'm not sure what day I'm on as far as TechEd coverage (a lot to swallow!) but the next session I see notes for was called "Location Solutions with Virtual Earth." It was a tour of Virtual Earth and how to write applications using the javascript library they expose in the Virtual Earth SDK.

You can get an idea of what Virtual Earth looks like by going to the site: http://local.live.com

The SDK is also quite simple; it consists of referencing some javascript libraries and using a DIV to position the mapped content. For example, to get the basics, just view source on this page.

The Virtual Earth API has the following features, per the presentation:

  • User Navigation
  • Mapping Imagery
  • "Bird's Eye" views
  • Pushpins
  • Find Places/Addresses
  • Directions/Routing
  • Polylines
  • Collections of GeoRSS

For each of these features there is an interactive SDK that generates example code so that you can follow along for your own application. *Very cool*.

I did have a few issues with timing and asynchronous processing; in other words, the wait for one command to finish sometimes seemed to cancel out a command that directly followed it. In my case I was trying to do a search for "Sioux Falls, SD" and then follow it with a ZoomIn method like so:

map.FindLocation('Sioux Falls, SD');  // asynchronous - returns before the map settles
map.SetMapStyle(VEMapStyle.Hybrid);
map.ZoomIn();                         // seems to get swallowed while FindLocation is still working

I'll dig around a bit - my idea for a helpful little location solution is a map of Sioux Falls and all the coffee shops along with their attributes: how good the coffee is, whether WiFi is free, and the general atmosphere.

I'm also planning to dink around with the Google Maps API to see how it stacks up.

}

Monday, June 19, 2006

BillG Review

{

Joel Spolsky writes about being reviewed by Bill Gates himself. Self congratulatory, but good nevertheless...

}

Sunday, June 18, 2006

Pragmatic Architecture

{

Cathi Gero and Ted Neward had a session on what they call "pragmatic architecture" - the fuzzy definition of which was architecture that maintains ideals until the consequences outweigh them.

It was interesting to get this high level at TechEd; most of the sessions I went to were very technical where this was a balance between philosophy and a general approach to developing software.

One of the first things they went after was the recent infatuation at the architectural level with "design patterns." According to them, a cut-and-paste approach to design patterns is not the path to success - and although I haven't immersed myself in the design patterns world, I suspect its proponents leave some room for flexibility in implementation.

According to Cathi and Ted, the goals of a software architect were twofold:
1. Functional Requirements (Business needs, raw functionality)
2. Non-functional Requirements (the "ities": scalability, maintainability, extensibility, and so on)

Their approach to being pragmatic about architecture was to leverage a vocabulary - not as a means to accomplish the goals of software architecture, but as a lexicon of strategies and tactics for accomplishing those two main goals. They covered what they called the 6 elements of architecture, with examples of approaches (patterns?) that lend themselves to each part of a solution.

Here are the 6 architectural "elements:"

1. Communication
2. Presentation (the front end)
3. State Management (how the data are stored)
4. Processing
5. Resource Management
6. Tools

For each of the 6 they presented examples of what we see in most modern software. Under Communication, for example, they listed what they call a three-part tuple:

1. Transport
TCP, UDP, other network protocols
2. Exchange
Request/response, asynchronous, fire-and-forget, amongst others
3. Format
XML, Text, JSON, Binary, and so on.

Presentation has to do with the interface for the user. Some examples given here were Console, Graphical Interface, and Markup.

The state model to apply in the application, in pragmatic language, is either durable (always available) or transient (available only at certain times). Another consideration for state management is the "shape" of the data: objects, relational, hierarchical, etc... Finally, there is the question of where the data lives: client, server, elsewhere...

For the processing used in the application, some implementation examples they gave were procedural, imperative, concurrent, parallel, and transactional. At a higher level, descriptions of processing approaches included divide and conquer, recursive, event-based, and shared queue, amongst others.

Resource management had to do with the persistence of data and configuration. Some styles discussed were locator/registry, discovery, injected, and a very sophisticated term for "hard coding": a priori.

Tools to consider in being pragmatic could be the programming language, development environment, frameworks, code generation, amongst others.

I'm not sure if this stuff originated with Cathi and Ted, but it's very useful in coming up with a high level picture of how software will work. Although most of what I've worked with recently fits under the umbrella of "web application" it would be very useful to approach new projects with this catalog and the set of approaches beneath each item in mind.

This entire session is available as a webcast, and I'd recommend it to anyone involved in software development, even those who don't think of themselves as architects. There is always a point, when I'm reading or listening to high-level material like this, where I begin to wonder if it really matters. But when it's kept concrete like this, the distinction is clear between architecture as high-level thinking that has to be done anyway and architecture as romantic "rah rah" for the higher-ups.

Ted's blog seems fairly alive but Cathi's blog has not been updated for a while.

}

LINQ, deeper

{

My Tuesday morning session was with Luca Bolognese, lead program manager on LINQ, covering a slightly different flavor than what Pablo Castro was demonstrating with ADO.NET vNext on Monday morning. Luca’s Italian accent was charming, but it was even more exciting to see what LINQ does on a deeper level.

He started by showing a classic problem that LINQ solves. Imagine a collection of integers that you need to filter on some criterion, something like:

int[] nums = {2,3,5,7,11};

It’s usually done with some kind of iteration in the form of:

int[] nums = { 2, 3, 5, 7, 11 };

List<int> selected = new List<int>();

for (int i = 0; i < nums.Length; i++) {
    if (nums[i] > 4) {
        selected.Add(nums[i]);
    }
}

// selected represents my values

I write code like that on a daily basis, it seems. In LINQ, queries become a “first class” member of the language and operate on anything that implements the IEnumerable interface. A problem like the above can be solved without line noise and iteration:

int[] nums = { 2, 3, 5, 7, 11 };

var selected = from i in nums
               where i > 4
               select i;

From the perspective of someone who writes the iteration + selection stuff every day, I think that LINQ is going to have a big impact on my code. But this is just the beginning. Luca then showed how LINQ is split out to target several different forms of data structures:

  1. LINQ to Objects (Anything implementing IEnumerable, collections etc… )
  2. LINQ enabled ADO.NET (mapping database structures to objects and then querying off of them)
  3. LINQ to XML

The first example above is a typical LINQ to Objects scenario, where filtering, querying, or manipulation can be applied to any collection that supports IEnumerable. Being able to do this sort of manipulation on Hashtables, DictionaryEntries, and so on is the kind of power demonstrated above.

In a LINQ enabled ADO.NET scenario, you can take an object that is mapped to some entity in your database and query off of it. I don’t recall the exact demonstration, but an example would be querying a collection of “customer” objects based on some filter:

var topcustomers = from customer in customers
                   where customer.orderCount > 10
                   select new { customer.id, customer.fullName };

I’ve heard a lot about ORM ever since my friend Aaron started talking about Hibernate (some time ago), but what catapults this past the typical ORM scenario is that the functionality is built into the language, not abstracted through method calls. I’ve been a bit lukewarm to the notion, since it seemed like a lot of up-front work and since this type of abstraction - objects that represent data and provide some manipulative capacity - is already built into the DataSet/DataTable model. However, two things make LINQ-enabled ADO.NET a little different: first, the IDE facilitates the mappings through an “EDM” diagram and mapping providers that give a lot of flexibility in connecting to the datastore; second, the developer wields a lot of control over how the mapping works and how things are updated – the queries, the stored procedures, all data access still belongs to you and not to some “tool.”

Finally, LINQ to XML is powerful – it exposes XML data to yet another form of “query” besides XQuery and XPath. There is also the ability to work with the hierarchical structure that XML documents expose as collections within collections. The LINQ processing can even be told, with a bit of instruction, how to retrieve content (depth-first traversal, and so on).
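For a taste, here's the kind of construction the XLinq side of the current preview allows - a sketch against the CTP API, so names may well change before release:

// Functional construction with the XLinq preview bits.
XElement person =
    new XElement("person",
        new XElement("name", "Jim"),
        new XElement("phone", "555-1234"));

// And the same data is queryable with the familiar LINQ syntax:
var phones = from e in person.Elements("phone")
             select e.Value;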

Luca’s blog seems to have died, but for some interesting tidbits, it’s here. Maybe with a few hits it can start up again?

}

Wednesday, June 14, 2006

Python and Ruby with .NET

{

An amazing session with Mahesh Prakriya and John Lam came on Monday night. Mahesh was covering the IronPython efforts for .NET, and John came to talk about Ruby. It was refreshing to see the smaller crowd of developers using .NET in practice but for whom language itself matters.

Mahesh gave a history of Python support in the CLR and made a case for dynamic languages within .NET. His examples were pretty cool, especially when he started with a few examples of Python:

2 + 2 (room is quiet)
22**64 (room is quiet - David is amazed at the response in milliseconds)

He also showed a powerful implementation of a Matlab / Mathematica type tool written in Python for .NET.

John's part of the talk was also very good - he has written a bridge between Ruby and the CLR. He was funny and expressive, and his demos of Ruby and the CLR gave a good picture of what he is trying to do.

John's blog is here; Mahesh doesn't blog, but a search on Google shows him in a few places. I'm going to be downloading IronPython and RubyCLR in the next month or so. Not completely sure, but it seems that here is where one can find RubyCLR.

}

ADO.NET vNext at TechEd

{

One of the best talks of the week was Pablo Castro's Monday session covering some of the new features of ADO.NET. Having used some form of Microsoft data access library for a while now, I found the changes he showed fairly dramatic; when they become widely available, they will change, on a fundamental level, how programmers think about data access.

The general direction from Microsoft seems to be giving developers more modelling capabilities to manage in the application space. The original step in this direction was ADO.NET itself, with which we can model at the application level using a DataSet and its associated classes. But the features in ADO.NET vNext - and I hesitate on superlatives - are an unparalleled shift (in my short life as a developer) in what we do for data access.

The rationale behind my claim of this being such a big change is that it is no longer limited to an object model or some IDE-fu that Microsoft implements in Visual Studio. This change is part of the language with which we express ourselves when talking to the database. In a word: LINQ.

LINQ is simply understood by unpacking the acronym: language integrated query. From a syntactical point of view in ADO.NET we can obtain a reference to the database and retrieve a collection of objects that represent what's inside.

I think this is a huge revelation - something that goes well beyond even the Object Relational stuff I hear about from time to time simply because it's not just objects, it's syntax.

Around LINQ as a syntactical approach to getting data from the database is the ability to create what is now called an EDM diagram, essentially a mapping of the database to the class types that will represent its internals. The EDM diagram can be manipulated to do everything from trivial things, such as changing a column name reference in the object versus what's in the database, to complex things, such as consolidating content from tables in order to avoid SQL JOIN expressions.

On keynote night the topic of Microsoft "innovation" came up, and based on what was delivered it didn't seem as though there was much newness in Microsoft's portfolio. But LINQ in all its forms - for XML, for databases, and for whatever implements IEnumerable - is an innovation indeed.

I'm definitely staying tuned to the LINQ project and the Microsoft Data Access blog.

}

Tuesday, June 13, 2006

Rob Relyea on WPF

{

The time lag on posts is due to several four-hour nights in succession while partaking in the festivities at TechEd. Here is a quick recap of Monday.

First session was WPF, presented by Rob Relyea. WPF seems to be coming together more and more; Rob demonstrated new controls and old controls with new features: textboxes that support spellchecking, controls with the ability to embed any other control (a button in a button) - the new listbox is quite a standout. Another new feature is the ability to do screen layout: flow layout, grid layout, and others besides the traditional absolute layout most WinForms developers are used to. Databinding in WPF is also much improved, and has learned from the web world: a repeater-type approach is now possible, along with new functionality in the listbox control. Vectors, fonts - there was a lot to the presentation. Rob has a blog, so it's something to watch as the WPF story consolidates.

After the session was finished I asked Rob whether they were watching other projects like Mozilla’s XUL/XPFE stuff since, in my own thinking, they were similar in approach. He was a bit dismissive here in the sense that he didn’t see why WPF and XPFE should be compared. Since we didn’t have the same assumptions, I withdrew the question although here is my own thinking:

  1. XAML is like XUL: both are special markup for describing presentation.
  2. XPCOM provides hooks into operating system functionality, making the client thick, just as Windows Vista is going to have a special understanding of and capability with XAML and the code associated with WPF applications.

There are some big differences for sure: XPFE uses javascript and CSS, WPF does not. XPFE relies on RDF, and I'm not sure there'll be any configuration involved in WPF applications, since the runtime should handle much of that for the developer (I am not sure about this). Another difference is the vector graphics and rendering of WPF - XPFE does not, to my knowledge, have an equivalent. However, the basis for my assumption was that both attempt to solve similar problems: extend the web-aware application to the desktop to give it more "client app" functionality and feel, leverage better controls, and allow more flexibility than HTML/DHTML and so on. And yes, this is from the perspective of WPF/e - the use in web-based applications.

There is much to learn, and much to be seen, of WPF. I’ll stay tuned to Rob’s and other blogs that follow the development.

}

TechEd Keynote

{

I’d been looking forward to the keynote ever since I found out that Ray Ozzie was going to be speaking. Unfortunately his piece of things was quite small – I imagine I wasn’t the only person disappointed by this. The theme of the keynote was the general direction and strategy of Microsoft and how we “IT people” fit into it.

It made me think a lot about the Macintosh ads and how much the corporate strategies of Microsoft and Apple differ. Microsoft has taken on the rather difficult (and not necessarily sexy, in a geek sense) undertaking of “people-centric” software for business. The idea is to make people more productive in an organizational setting with software.

This reminds me of Paul Graham’s exhortation to young people to pick hard problems to work on. The word “hard” is subject to semantic lashing, but in this case the idea of making people work better together is a very difficult problem, especially because it’s difficult to do without reinventing the wheel or causing new problems with the current fixes.

Although it’s not explicitly stated, it seems that Microsoft’s approach here is not to “innovate” necessarily, but to commoditize new technologies that are proving themselves and package them for masses at all levels: masses in systems administration, masses in development, and, of course, the “mass” of society.

One other thing about Ozzie’s keynote: there is now a conversation about “smart client” applications, which represent something I began to suspect when I first installed Google Earth: the possibility of leveraging a thick client that is intensely web aware. Of course this notion has been around for a long time – yes, I do play MMOs on occasion – but now a technology like that could be commoditized by Microsoft, moving developers away from the traditional web-based application to a more controlled, powerful “experience” the developer can produce. Web applications still have their place, but this is essentially the XAML WPF/e notion. Kind of cool, in the sense that internal web-based applications could see fewer “fix the back button” types of issues, but also a bit sinister, since this is a big move away from “standards” to the proprietary, Windows choke-hold environment.

Lots of other thoughts, and some larger context as TechEd gets underway.

}

Thursday, June 08, 2006

Thinking Atlas

{

One thing I'm very interested in hearing about at TechEd is the Microsoft Atlas framework. Ajax is haute couture, and Atlas seems to be at the center of Microsoft's strategy. On my current project I took a look at using Atlas but never really "got" the benefits of the technology. The declarative side of things seems to be designed for the IDE, and without the IDE it's a non-intuitive hassle to hand code things out. The second thing that made me leery was the direction Atlas took towards web services. The tasks I needed to perform were very simple, and I shuddered when I thought about how much overhead there would be, say, for a webservice whose only job was to send me a filtered list of things for a dropdown. In the end I used my own XmlHttp and a bit of POX (I love that acronym, "Plain Old XML") to send things back and forth. Also, courtesy of the O'Reilly Ajax Hacks book, I got a handle on JSON and how awesome it is for sending things from the server and handling them with client-side javascript.
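For the curious, the server side of that hand-rolled approach amounted to something like this - a hypothetical sketch (the handler name and the JSON shape are made up), not the actual project code:

// using System.Web;
// Hypothetical HTTP handler that returns a JSON array for the dropdown.
public class FilterHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Filter logic omitted; a hard-coded list stands in for the real data.
        string[] items = { "alpha", "beta", "gamma" };

        context.Response.ContentType = "application/json";
        context.Response.Write("[\"" + String.Join("\",\"", items) + "\"]");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}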

Today I listened to Ken Alstad interviewed on .NET Rocks concerning Atlas, and it confirmed my suspicions about how heavily Atlas leverages web services, while showing how easy it makes "binding" data from the server using javascript.

It's interesting that there seem to be two directions here - looking on the open source side of things, Prototype seems to be generating quite a buzz. I ran into Prototype in December courtesy of 24 ways and it took about 35 minutes to really "get" it. Prototype is getting mixed in with a lot of JSON and other javascript libraries and techniques for some pretty cool stuff like script.aculo.us.

On the Microsoft side we're waiting for Atlas to become standardized, complete, (marketable?). It's declarative, more IDE friendly it seems, and at least for me, less intuitive.

Anyway, part of the problem might be my ability to understand it and see where I can leverage it so I'm going to be in Jeff Prosise's session on Atlas at TechEd. A few questions I'm thinking of are as follows:

1. How to control when data is bound, and how to do filtering and other operations using javascript.
2. Will Atlas play well with JSON? Not JSON being shoehorned into some Atlas app, but more like the ability to return JSON style strings from the server.
3. What sort of IDE interface can we expect with Atlas's release? Can we expect any improvements to javascript editing in Visual Studio as a result?

... I'm sure there are more, and I'm sure that after the session I'll have a much clearer picture of Atlas and its future in web apps for Microsoft developers. I am pretty sure we made the right choice on the current project, but a year from now I could be singing Atlas's praises.

}

Tuesday, June 06, 2006

Code as Design: Implications for Architects

{

I've just gotten to Jack W. Reeves' thoughts on Code as Design, in his seminal article originally published in the now-defunct C++ Journal and republished on developer.*. I really liked his thoughts, and although I can't say the idea has been sitting intuitively in the back of my mind, its implications are things I've felt for a very long time.

The basic premise of his article is stated early: final source code is the real software design. From that design specification, a compiler and linker do the manufacturing work of producing the software; the source code is, in effect, the design document. He goes on to discuss implications of this that are well thought out and, in my experience, very true.

A great thing about the article is its return to treating engineering and software development together. These days I see a lot of writing about how the words software and engineering could never go together, but as Reeves shows, there is a very real relationship between the disciplines, especially if we think of our code as a design document.

But the real implication for me, beyond the ones stated by Reeves, has to do with the title "Software Architect" - which I happen to hold (not that it's overly meaningful; everyone at my company is a "Software Architect"). People with the title who wield Visio, use cases, and management-speak exclusively aren't the real architects of the system. Of course those tools are very useful, and of course general direction is always necessary for a project, but to be a real "Software Architect" one has to stay close to the code - close to the actual design document - to really shape the manufacturing process. Staying close doesn't mean writing every line of code oneself; it means understanding and being part of the process of making it.

}

Monday, June 05, 2006

TechEd 2006

{

I'm very, very excited to be going to Microsoft's TechEd conference this year with my boss. Now I'm counting down the days - we leave for Boston on June 10, and the conference runs from June 11 - 16. The first conference I was able to talk my way into going to was VBITS 2000, and as I think back upon my years training and consulting for my previous employer, that was among the best.

I got to rub shoulders with some pretty amazing people, and learned many valuable lessons. I'll never forget sheepishly asking Aaron Skonnard what he thought of XML Data Islands in IE 5, or Bill Vaughn's theatrics (and knowledge) of data access.

I think my approach and goals at the conference may be a little different from those of people looking forward to a week off to fraternize and network. I'm hoping to learn as much as possible, since I'm not sure when I'll next get the opportunity to go to a conference at my employer's expense.

The biggest trick is going to be separating the Fire & Motion from the sessions that will give me knowledge and direction for my daily work. Of course it's good to be aware of what's coming, but I've always been reluctant to invest much in something that is subject to change. Beyond that, I usually prefer to wait a little while before adopting something (let others be the bug testers), which also pits me against things that are a bit too futuristic.

With that said, here is my schedule for the week. There are a few things I'd like to see this year (list is totally subject to change, like, on a minute by minute basis):

1. More ASP.NET 2.0
Deeper on the new features, Personalization API, Web Parts, Internals

2. XML / XQuery / LINQ
Some ideas on where these are in the pipeline so I can start using them on projects

3. Workflow
I have DVDs of the last TechEd, and this came up a bit. I'm interested again on seeing an angle to take on this tool.

4. Sharepoint and Biztalk
I signed up for a few sessions, but this is the part where I get leery; I'm not sure when the 2007 versions will be out and about. I know there are many, many opportunities to use the current versions of these products and would probably benefit from seeing that kind of material right away. But I'm hoping to glean a few tidbits and stay ahead of the curve when the new versions are released.

5. Team System
We are using it in house, so I need to understand it - I also want to get an accurate picture of how Team System can be leveraged by small teams. Most descriptions (and the pricing!) seem to put it in the "enterprise" category, but most of what I've worked on in the past, and probably for the foreseeable future, will be software projects with fewer than 20 people involved.

6. IIS 7
I use IIS every day, and it's been like an old friend since the early versions. I'd like to see where the product is headed, and how its path meets with ASP.NET 2.0 and all the other pieces of web application development.

7. Architecture Talks
I signed up for a few that will hopefully be interesting and useful. I'd also like to get a better handle on SOA, "contracts," and the like. Distributed architecture makes a lot of sense to me, but I'm a bit wary of the architecture astronauts when it comes to enterprise application architectures - I try to translate that stuff into what I do on a daily basis (enterprise app for a publicly traded company of 5000+ employees) and still sometimes miss the mark. Perhaps I'll get a chance to meet some people in this environment as well who I can maintain contact with.

I'm hoping that I'll be able to change sessions as well during the conference. I'm fairly pleased with what I have, but the flexibility would be nice.

As the week progresses I'll hopefully write down some things I'll be hoping to answer, as well as some pre-conference reading.

TechEd 2006, here I come!

}

Dino says...

{

I think it rubbed off from my liking of Francesco Balena, who impressed me greatly at my first tech conference, VBITS 2000 in San Francisco, but Dino Esposito is another of those intensely smart Italian programmers living right on the edge of the Microsoft technology curve.

Anyhow, in response to lots of developers like myself wanting to get better, Dino wrote the following:

"Some people here at the conference asked what I feel to recommend to do NOW. Pretty easy job.

I do recommend to learn as much as possible about ASP.NET 2.0 internals. For example: handlers, modules, providers, lifecycle, script-oriented API around the Page class. To learn as much as possible about best practices for control development.

In this regard, I wrote quite a few articles for the ASP.NET DevCenter in the past months to form a sort of crashcourse on control development."

That's some great advice, and it exposes a few personal areas that could see growth.

}

Friday, June 02, 2006

How good are you?

{

In my teaching years I'd often run into people who thought they understood things that they really didn't. Today, spreading like wildfire, I found a series of "do you know" posts that measure levels of understanding in three technologies most people think they know: HTML, Javascript, and CSS.

Of course a personal evaluation leaves one open to questions but I think I'm doing okay level-wise. I wasn't at the top in any of the posts, but just a notch below.

When it comes to technologies, even those I use daily (like the above), I'm always loath to claim a maximum level of knowledge, because I'm well aware of those who sit at the top of the food chain in whatever area it might be: I'm no Eric Meyer when it comes to CSS, and I'm no Thomas Fuchs when it comes to javascript. And although I know much of what there is to know about HTML, I haven't spent (what I consider to be a waste of) time navigating rendering differences across browsers and standards support unless it's been a big problem on a project.

Well, here's to all of us who are still learning...

}