SourceGrid 4.22 released

December 12, 2009

SourceGrid 4.22 has been out for a while now.

The most interesting changes are these:

  • QuadTree implementation for spanned cells. This improves performance 2 to 5 times when working with spanned cells. I used a simple library found on CodeProject.
  • The source code repository has moved from CodePlex to BitBucket. The new repository is here.

The reason for the move is CodePlex's terrible SVN support; read more in another post. BitBucket is a site inspired by GitHub and provides Mercurial DVCS hosting. Hopefully this will encourage more people to experiment with the project.

Warning: long post ahead. Skip straight to the bottom if you just want to debug ASP.Net applications with #D.

How I started programming ASP.Net

Recently I changed jobs, and with that my profile changed from .Net desktop apps to ASP.Net web site programming. (Yes, from now on there will be more Internet ;)) Along with the new job, I also got new tools:

  • VisualStudio + Resharper
  • Microsoft SQL Server Management studio
  • Team Foundation Server for source control
  • ASP.Net Web site development

Coming from a completely different environment, where for the past 4 years I worked with SharpDevelop, Windows Forms, Boo, OpenOffice, FlameRobin + Firebird, SVN, and NAnt, I must say that for a few weeks (and still now) I have been feeling like a baby born into a completely new world.

My experience so far has not been that exciting, though. Here are the reasons:

VisualStudio + Resharper

Good points:

  • Resharper gives more refactoring options. The ones I have already used: convert property to automatic property, use a base type parameter in a method instead of a concrete implementation, introduce field, extract method.

  • Automatic error checking is a nice feature.

And finally I got a chance to work with .Net 3.5! (Of course, that has nothing to do with #D or VS.)

Quirks and nuisances:

  • Code formatting is tragic.

It formats not only the indentation but also the line breaks, which is very annoying. Imagine I want to add a few spaces in an aspx file after a div element. Nope, Visual Studio will remove those spaces. When I went to the options and tried to tell VS not to mess with line breaks, the formatting tools stopped working altogether, complaining about a bad configuration.

  • It is sooo damn slow!!!!

I do not know why, but even saving a file takes around two seconds. Building an already up-to-date project takes around 20 seconds, while in #D it takes only 5!

Even starting Visual Studio takes a minute. Compare this to #D's 10 seconds: after 4 seconds #D is ready and you can start typing, and parsing finishes in the remaining 6 seconds, which is very fast compared to VS.

  • It does not have a decent “go to” function like the one in #D.

I did find Go to File and Go to Type, but when I type a file name, the search matches only from the beginning of the name, not anywhere inside it. That is, if you have a file named “myCompany” and you type “company”, both tools will show you nothing.

Lastly, in #D you can type digits and they are understood as line numbers, while in Visual Studio you have to use a separate “go to line number” function. Why three functions instead of one!? It just makes no sense.

  • “Go to declaration” strange behaviour

“Go to declaration” and “go to parent class” are two different functions in Visual Studio. In #D, if I press Ctrl+Enter on a variable of type MyClass, it takes me to the place where that variable was declared. If I press Ctrl+Enter again on the variable's type, it takes me to the implementation of MyClass. Nice and painless, but not in VS, where these two functions are separate.

  • VS does not allow editing source code while compilation is in progress. Or at least, it complains that everything will go very bad if I do so.

I am not sure why VS complains about it; #D appears to handle this with no problems. Of course I know that if I change code during compilation and save it, the compiler might report errors if I edit the code in an incorrect way. After all, I am a programmer, so I know the implications of what I am doing.

Microsoft SQL Server Management Studio 2008

  • Microsoft SQL Server Management Studio lacks IntelliSense, which is very surprising.

With FlameRobin you have IntelliSense for SQL keywords, for tables, and for table columns. With FlameRobin you can also copy results “as insert statements” or “as update statements”. This helps a lot and saves a lot of typing.

  • AFAIK, by default all scripts are executed without transactions

So if you play with your queries, it is your own responsibility not to mess everything up. And if you run a script a few pages long and it fails in the middle, half of it has executed and half of it has not.

Team Foundation Server for source control

TFS source control is a very, very strange piece of software indeed. Imagine this: when you check out all sources from the server, every file initially has the ReadOnly attribute set. Yes, that's right: that is how TFS distinguishes which files have modifications and which do not. Now, if you remove the attribute and edit the file with another editor, TFS will not understand what is happening. The reason is that when you edit a file from VS, TFS checks the file out for editing; that is, it removes the ReadOnly attribute and contacts the TFS server to mark the file as being edited. Furthermore, AFAIK this scheme does not allow two people to edit the same file at the same time.
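The read-only scheme itself is just plain file attributes; this small sketch (the file name is hypothetical) shows what TFS effectively does on check-out:

```csharp
using System;
using System.IO;

class ReadOnlyDemo
{
    static void Main()
    {
        string path = "example.cs";        // hypothetical file
        File.WriteAllText(path, "// demo");

        // TFS keeps unmodified files marked read-only...
        File.SetAttributes(path, File.GetAttributes(path) | FileAttributes.ReadOnly);

        // ...and a check-out simply clears the flag again (plus a
        // round-trip to the server, which this sketch omits).
        File.SetAttributes(path, File.GetAttributes(path) & ~FileAttributes.ReadOnly);

        Console.WriteLine((File.GetAttributes(path) & FileAttributes.ReadOnly) == 0
            ? "writable" : "read-only");
    }
}
```

Which also explains why editing a file outside VS confuses TFS: the flag changes, but the server never hears about it.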

The check-in dialog is also somewhat unhelpful: it does not remember the previous check-in message. Merging is done in an interesting way, too. Instead of merging automatically, it asks whether the user wants to merge the file automatically or manually. I know of situations where I would like to merge files manually (for example css, csproj, and sln files), but for all other files I would like automatic merging to happen immediately! Why bother me with even more clicks?

I could not find a “blame” command in TFS.

When compiling the project, the compiler, as we all know, produces files in the obj\ and bin\ directories. While this is happening, TFS checks whether those items exist on the TFS server and every time reports that no matching files were found. This happens even when the obj and bin folders are set to be ignored.

ASP.Net Web site development

Web site development also deserves its own point. Note that there is an ASP.Net Web Site and an ASP.Net Web Application. The difference between the two is that with a web site there are no csproj files, so if you want to compile your site, the compiler does not know which files to compile! What does it do then? It goes recursively into each directory and builds each directory separately. As you can imagine, this operation is very slow. It is so slow that completely rebuilding our site took around 5 minutes, of which probably 4 minutes went to the website compilation.

References are also managed very interestingly. When you click “Add Reference” in an ASP.Net web site and choose an assembly, a file named <YourAssemblyName>.refresh is created in the bin directory. Yes, that means this .refresh file has to be in your source control, and that means part of the bin folder must be in your repository. Now, what is inside that .refresh file? Simply a relative path to the original library file! For example, in a file “YourApp.WebSite\Bin\log4net.dll.refresh” you would find this text: “..\Libraries\Log4Net\log4net.dll”

How to debug ASP.Net applications with SharpDevelop

Luckily, the situation changed quite soon. The following happened:

  • TFS was switched to SVN. Hurray!
  • The web site was changed to a web application.
  • With these changes, I could successfully load our web application solution with SharpDevelop 3.x and compile it.

Of course, #D does not support debugging web applications out of the box. I tried to attach to the w3wp.exe process (the process associated with an application pool in IIS), but sadly my system is Windows XP x64, so w3wp.exe is a native x64 app, and #D cannot attach to it: debugging x64 applications is not yet supported in #D. The solution is to run your own web server, and there is one: the Cassini web server. You can find it in dmitryr’s blog.

Here are the steps to set everything up:

  • Compile the Cassini server for the x86 architecture, so that #D can debug it (#D does not yet support debugging native x64 applications).
  • In your web application’s csproj file, on the Debug tab, set the following:
  • “Start external program” must point to your Cassini web server.
  • The first argument is the path to your web application directory.
  • The second argument is the port.
  • The third argument is the virtual path, such as ‘/’.

Now set a breakpoint and point your browser at your app; the breakpoint should be hit.

You can even run Cassini as a stand-alone application, or even as a Windows service, and attach SharpDevelop to that process.

What about Soft Deletes?

September 7, 2009

There is a topic going around the blogosphere right now about soft deletes. A soft delete is all about setting a flag on a record. After the flag is set, the record should be treated as deleted, and users should never see it. We have all heard such requirements, so Ayende and Udi Dahan shared their own perspectives on this topic.

Now, before you go reading those posts, note that I intentionally made the word “never” italic. There is a reason for this: that word is special and must be treated separately. How many of you have heard things like:

  • We will only ever have these two modules
  • We will only ever perform this operation this way
  • Our application will never be used for anything except sharing documents
  • After invoking this operation, the data will be lost, and nobody will want to use it
  • etc.

As you may have guessed, there is no such thing on planet Earth as “never” or “always”. These words carry a special meaning, since they state that whatever they describe will stay constant forever. But what is constant in our life? Nothing, actually. Governments change, people are born, grow old, and die, traditions come and go. So when you hear that some specific software requirement will never change, simply do not believe it. I do not, and that has always proven to be the correct position to hold.

So when talking about soft deletes, be sure that users will want to use the deleted information sooner or later; one way or another, the information must be preserved. When you read Udi Dahan’s post, he will tell you exactly the same: do not delete information, because it carries a lot of meaning in the business context. Ayende, meanwhile, writes that setting a flag to simulate a delete operation is not a great idea, since it means your data model is changing over time; that is, it is not append-only. An append-only model makes a system immutable, which has its own benefits. So what are the benefits of immutability? Google can help you with that, but I will compile my own list here:

  • A classic recipe for race conditions is passing pointers to mutable objects between processes while trying to ensure synchronized access to shared memory; immutable objects avoid this.
  • In Erlang (which has immutable objects), processes have separate heaps that are GC’d separately, so GC freezes happen much more rarely.
  • There are a lot of interesting links here: immutability-in-c.

The list is not that big, but the benefits are worth having. What I have found personally is that having more immutable objects makes your life easier, because you know that your object cannot be modified in any way you don’t expect. For example, a call to a function cannot change your object, since it is immutable. The only way to change the object is to call a function and explicitly get a new object returned from it.
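As a small C# illustration of that last point (the class and its names are my own, not from any of the linked posts):

```csharp
public sealed class Money
{
    private readonly decimal amount;
    private readonly string currency;

    public Money(decimal amount, string currency)
    {
        this.amount = amount;
        this.currency = currency;
    }

    public decimal Amount { get { return amount; } }
    public string Currency { get { return currency; } }

    // "Changing" an immutable object means getting a new one back;
    // the original can never change under your feet, even when shared
    // between threads or passed to unknown code.
    public Money Add(decimal delta)
    {
        return new Money(amount + delta, currency);
    }
}
```

A call like wallet.Add(10) leaves wallet untouched and hands you a new Money, which is exactly the property the append-only argument relies on.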

So to sum up: Ayende advocates immutability (because of very nice benefits), while Udi Dahan advocates business interests.

Everything boils down to the fact that soft deletes as they are are not the best solution. They are useful because of business requirements, but they can and must be improved.

Ayende – Avoid Soft Deletes

Udi Dahan – Just don’t delete

Ayende – Soft deletes aren’t append only model

Multimethods in .Net

July 23, 2009

Today I was browsing through Wikipedia and explored various programming concepts that I had not even heard of.

One of those is multimethods, or multiple dispatch. A multimethod is a method that is selected dynamically at runtime based on the runtime types of its arguments. What I mostly use is inheritance, where the required method is selected at runtime based on the type of the receiving object; the arguments are not involved in this process, as they are resolved at compile time.

Multimethods are often compared to the Visitor pattern. While that pattern achieves the same result, it carries its own disadvantages. Here is a nice explanation of Visitor vs multimethods.

What is most interesting is that .Net does not support multimethods out of the box, but there is a library that does just that. Take a look at MultiMethods.Net.
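To see why ordinary overloading does not give you multiple dispatch, here is a hypothetical example (my own, not taken from the MultiMethods.Net library):

```csharp
class Shape { }
class Circle : Shape { }
class Square : Shape { }

static class Collider
{
    // Overloads are resolved at compile time from the static types,
    // so two variables typed as Shape always pick this overload.
    public static string Collide(Shape a, Shape b)   { return "shape/shape"; }
    public static string Collide(Circle a, Square b) { return "circle/square"; }

    // A poor man's multimethod: dispatch on the runtime types instead.
    public static string CollideDynamic(Shape a, Shape b)
    {
        if (a is Circle && b is Square)
            return Collide((Circle)a, (Square)b);
        return Collide(a, b);
    }
}

// Shape a = new Circle(), b = new Square();
// Collider.Collide(a, b)        -> "shape/shape"   (single, static dispatch)
// Collider.CollideDynamic(a, b) -> "circle/square" (multiple dispatch, emulated)
```

A multimethod library essentially automates the runtime-type test in CollideDynamic for every overload pair.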

There is an interesting “debate” going on in Ayende’s reply to “Analyzing NH 2.1 Changes”. The biggest point of the discussion, IMO, is this statement made by Patrick Smacchia:

“(NHibernate) code base is completely entangled and it must be a daily pain to maintain the code”

This is a really strong claim, which must be backed by a lot of facts. Apparently, in the long discussion, a lot of facts and different viewpoints got mixed up. So I will try to highlight some of the more interesting points that caught my eye. The story goes like this:

The statement about NHibernate’s entanglement is supported by the fact that a lot of namespaces use each other. This is very well illustrated by this image. Both Mark and Patrick agree that namespaces are the right thing to measure coupling by.

> Mark Lindell asks Ayende: Are you claiming that using either namespaces or assembly dependencies is not valid for the NHibernate code base?

> Patrick: On this I follow Mark Lindell: If you don’t use namespaces to componentized and create layer inside your unique assembly then which other concrete artifact are you relying on?

Ayende responds:

> Ayende: Analyzing namespace dependencies as a way to measure coupling isn’t going to work for NH, no.

> Ayende: (NHibernate) is highly componentized, and we are actually working on making this even more so, by adding additional extension points (like the pluggable proxies, object factories, and more).

Patrick then argues that NHibernate is not componentized from the inside (as seen by NHibernate developers, that is). I believe this means Patrick agreed that NHibernate.dll can be treated as a single component from the user’s perspective, as he did not object to that. If we look at NH as a component, then this is the standard stuff needed to use NHibernate: NHibernate.Cfg.Configuration, ISessionFactory, ISession.

Patrick argues that it would be difficult for a newcomer to start working on NH:

> Patrick Smacchia: But you say it is highly componentized from the user point-of-view, not from the NHibernate insider point-of-view. Think of the new developers coming on NHibernate. How can he understand what depends on what and why?

Ayende’s answer was that a new person joined NH last year and contributed a new feature, the ANTLR HQL parser, which resulted in at least one nice feature: bulk actions. Another argument was the number of patches sent by users; those patches are relevant and get applied.

Patrick then goes on to say that you have to somehow grasp the intentions of the former developers. He does not say how to do that, however:

> Patrick: Because if one cannot recover the mental state of the former developer, then one is about to miss former developer intentions. If one doesn’t understand former intentions, one cannot do better code than it was.

Ayende gives two answers for how this works in NH:

> Ayende: If I can’t understand what is going on from the code, it is a problem in itself, period.

> It is actually _really_ hard to break NHibernate, we got pretty much the entire thing covered every which way.

So basically I see two assumptions there, one held by each side:

  • One assumption is that “each namespaces depends directly or indirectly on all other namespaces”, which implies it “must be a daily pain to maintain the code. There is a total lack of componentization and each modification might potentially affect the whole code base.”
  • The other assumption is that NH simply does not use namespaces as the way to componentize the system, so NH’s namespace dependencies do not prove that the code is a pain to maintain. Moreover, huge unit test coverage makes it difficult to break existing features, and it is imperative that a programmer understands the code and is able to make it better (even with namespaces not being used to componentize the system).

I believe the reader can see that both parties have very good points, but neither wants to embrace the fact that what works in one place does not work in another. That goes for all rules made or discovered by humans: while Newtonian mechanics works on Earth, it does not necessarily work at much bigger or smaller scales.

Everything depends on context, though even Frans Bouma would like to disagree with me. A practice that works great in one place might simply fail in another. There is no “one size fits all”, and there is no silver bullet. So I would disagree with Frans Bouma and say that principles are bound to context.

One final thing about understanding code: boxes and arrows help to get an abstract view of a system, especially when you need a first glance at the big picture. However, they do not help when you try to understand how one box interacts via some dotted arrow with another box. Unfortunately, many of the important things are in the details (though not all of them, of course). The main point here is that a developer must have a very strong thinking capability: he or she has to be able to keep all the boxes and arrows in mind. A developer who cannot easily build a mental model and navigate it by simply looking at the code will be in a difficult position. I would even go as far as saying that the ability to relate code to diagrams, code to business requirements, or code to a mental model is a must; this is a point that distinguishes one developer from another. To sum up, namespace dependencies do not necessarily show code entanglement, but I believe this information can and must be used to evaluate the code in its given context.

There are many new features; read the full list here.

Here is the official post. Download and enjoy bulk actions and dependency injection :)

Yesterday I was upgrading NHibernate to the 2.1.x branch, which will become the next release in the coming months.

The upgrade process was not extremely difficult, but it was really involved and rather long, as I am using the Castle project and NHibernate.Generics.

So the overall process was like this:

  1. Check out the Castle trunk and build it.
  2. Update the Castle.DynamicProxy2 and Castle.Core references in NHibernate.
  3. Build NHibernate.
  4. Check out the NHContrib project.
  5. Build the NHibernate.Search project, which is part of NHContrib. Don’t forget to update references.
  6. Build the NHibernate.Linq project, which is also part of NHContrib. For this project I had to use SharpDevelop, since it does not support nant. But everything works nice and clean as usual :)
  7. Update references in the Castle project. These were in the \SharedLibs folder, and the NHibernate.Linq reference was in the \SharedLibs\net folder.
  8. Successfully build everything with nant.
  9. Build NHibernate.Generics.

Note that for NHibernate and Castle you have to compile against the 3.5 platform, but the generated assemblies can run on .Net 2.0. This is done by passing the argument -t:net-3.5 to nant.

After building everything, I had to add the proxyfactory.factory_class configuration property, as described here. Pay close attention to the fact that the class is identified by both its full name and its assembly name. Since I am using Castle, I had to add this line:

<property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>

That’s it. Now I have bulk action capability and “group by” via IProjection :)

Today I got a strange error.

Error CS0518: Predefined type ‘System.Object’ is not defined or imported

Strangely, all nant-related tasks just work as expected.

After a little googling I found a solution here, which says I need to reference mscorlib. I did that and the error was gone.

Then I opened my project preferences and saw that a checkbox was checked which told the compiler not to reference mscorlib. After unchecking it, the day was saved.

do not reference mscorlib

The screenshot shows preferences in SharpDevelop 3.1

How would you go about implementing this kind of functionality:

After a domain property changes, a system function is called. If that function changes another property, the system functions registered for that property (if any) are called in turn.

A domain property is just any property that has a business meaning; for example, a person has a Name and a Surname, and a task object has a collection of ResponsibleUsers, a Status, a FinishDate, etc.

A system function is custom business logic that can be implemented either by a programmer, compiled and loaded during system start-up, or defined via an “Administration” menu in your application. In any case, the system function is registered for a concrete business type and an exact property.

An example might look like this:

[OnPropertyChanged(typeof(Task), "Status")]
public void OnStatusChange(Task task)
{
    if (task.Status.Name.Equals("Finished"))
        task.FinishDate = DateTime.Now;
}

[OnPropertyChanged(typeof(Task), "FinishDate")]
public void OnFinishDateChange(Task task)
{
    // do something with the FinishDate property
}
Manually implementing INotifyPropertyChanged for each of our domain objects is not a solution; what I would go with is an automatic one.
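A very rough sketch of how such automatic dispatch could be wired up with reflection. The attribute matches the example above; the registry and its names are my own assumption, and a real implementation would hook the Raise call into interceptors or proxies rather than call it by hand:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class OnPropertyChangedAttribute : Attribute
{
    public Type EntityType { get; private set; }
    public string PropertyName { get; private set; }

    public OnPropertyChangedAttribute(Type entityType, string propertyName)
    {
        EntityType = entityType;
        PropertyName = propertyName;
    }
}

public class SystemFunctionRegistry
{
    // (entity type full name + property name) -> handlers to invoke
    private readonly Dictionary<string, List<MethodInfo>> handlers =
        new Dictionary<string, List<MethodInfo>>();

    // Scan a class for methods marked with [OnPropertyChanged].
    public void Scan(Type holder)
    {
        foreach (MethodInfo method in holder.GetMethods())
            foreach (OnPropertyChangedAttribute attr in
                     method.GetCustomAttributes(typeof(OnPropertyChangedAttribute), false))
            {
                string key = attr.EntityType.FullName + "." + attr.PropertyName;
                if (!handlers.ContainsKey(key))
                    handlers[key] = new List<MethodInfo>();
                handlers[key].Add(method);
            }
    }

    // Called whenever a domain property changes (by an interceptor,
    // in the automatic solution).
    public void Raise(object entity, string propertyName, object holderInstance)
    {
        string key = entity.GetType().FullName + "." + propertyName;
        List<MethodInfo> list;
        if (handlers.TryGetValue(key, out list))
            foreach (MethodInfo m in list)
                m.Invoke(holderInstance, new[] { entity });
    }
}
```

Note that cascading (OnStatusChange sets FinishDate, which should trigger OnFinishDateChange) only works if the property setters themselves go through Raise, which is exactly what the interceptor-based posts below are about.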

One nice source of ideas is Fabio Maulo’s and José F. Romaniello’s blogs.

Here are the blog posts that are very very interesting:

Together, these posts tell how to dynamically inject the necessary functionality into your POCOs via interceptors, and how to wire everything up with NHibernate.

As a side result, they also show how to program even without POCOs, that is, using only the interfaces of your domain entities.

Finally, I managed to find out how to do grouping with NHibernate. Basically, what you need to know is that grouping and aggregate functions are all done with IProjection. A quick example:

return session.CreateCriteria(typeof(HumanTask))
    .CreateCriteria("Human", "responsibleuser")
    .SetProjection(Projections.ProjectionList()
        .Add(Projections.RowCount())
        .Add(Property.ForName("responsibleuser.FullName"))
        .Add(Property.ForName("responsibleuser.FullName").Group())
    );

What this does is group tasks by the assigned human and return the count and name for each group. A nice way to understand how to use some features of NH is to read its code. I found this by looking at the NHibernate.Test.Criteria.CriteriaQueryTest class. There is a nice method that builds a lot of criteria: CloningProjectionsTest. Have a look at it.
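Consuming the result might look like this; a sketch assuming the criteria built above (each projected row comes back as an object[] in the order the projections were added):

```csharp
using System;
using System.Collections;

// "criteria" is the query built above (assumption); every row is an
// object[]: { row count, FullName, grouped FullName }.
IList rows = criteria.List();

foreach (object[] row in rows)
    Console.WriteLine("{0} task(s) assigned to {1}", row[0], row[1]);
```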