Simple numbered markers with Leaflet.js

I thought I’d look up how to create numbered markers with the excellent Leaflet.js. Perhaps surprisingly there is no built-in method, but looking around the Web there are a few suggested ways of doing it. These all seemed rather over-engineered, though, so I decided to engage brain rather than just blindly copy. The following method works, and is a lot less complicated than anything else I’ve seen out there.

I wanted to achieve this (yes I know grey is not a good choice in this case, but it is what the user wanted):
[Screenshot: map with grey numbered markers]

To get markers like this, first copy the Leaflet.js default marker image, which in 0.7.3 is here -> http://cdn.leafletjs.com/leaflet-0.7.3/images/marker-icon.png . I simply modified this using Paint.NET to be solid grey. Put the final image into your website/application.

Create a CSS class like this, obviously substituting your image path/name and font colour:

.number-icon
{
	background-image: url("images/number-marker-icon.png");
	text-align: center;
	color: white;
}

Then when creating your marker you need to add this code, putting the number in the ‘html’ parameter:

var numberIcon = L.divIcon({
      className: "number-icon",
      iconSize: [25, 41],
      iconAnchor: [10, 44],
      popupAnchor: [3, -40],
      html: variable_containing_the_number
});

var marker = new L.marker([lat, long], {
      icon: numberIcon
});

And that’s all there is to it. I’m not quite sure why there are so many elaborate methods out there, but perhaps they suit use cases other than the one I had.


Fast Correlation-Based Filter in C#: Part 2

In a previous post I started this series on the Fast Correlation-Based Filter (FCBF). That post was quite long, setting up the algorithms used to calculate symmetrical uncertainty (SU), which is the ‘decision’ engine behind FCBF.

Now we have to actually use SU on our entire dataset, to discover which features are considered the most important (at least as far as SU is concerned!). Don’t worry, this post isn’t quite as long. 🙂

The basic premise is this: we first need to calculate SU for each feature with respect to the class label of the data items. In this scenario we use the first ‘feature’ in the EncodedSequence property (which is just a List of strings) as the class label. So the calculation is SU(feature, 0), where feature is every feature other than the class label itself, of course.

The features are then ranked in descending SU order. An arbitrary cutoff threshold can be passed (usually just set to 0 initially), and any features whose SU falls under that cutoff are eliminated.

Then comes the part where redundant features are removed. FCBF marks feature B as essentially less useful than feature A if the SU between A and B is greater than or equal to that between the class label and B. So in practice FCBF first selects the most highly ranked feature (A) and calculates its SU with the next most highly ranked (B). If that is greater than or equal to B’s SU with the class label, B is eliminated. FCBF then performs the same comparison between A and every remaining feature. Once it reaches the end of the list it moves on to the next non-eliminated feature and starts the process again. By the end of this process the majority of features will usually have been eliminated; the ones that are left are considered to be the useful ones and are selected.
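Before looking at the C# below, here is a quick sketch of that ranking-and-elimination loop in Python. The SU scores are entirely made up (the feature numbers and values are hypothetical, purely to make the steps concrete):

```python
# Hypothetical SU-with-class scores for features 1..5 (made-up numbers)
su_with_class = {1: 0.80, 2: 0.75, 3: 0.40, 4: 0.10, 5: 0.02}

# Hypothetical pairwise SU between features (made-up, stored symmetrically)
su_pair = {(1, 2): 0.90, (1, 3): 0.20, (1, 4): 0.05, (1, 5): 0.01,
           (2, 3): 0.10, (2, 4): 0.05, (2, 5): 0.01,
           (3, 4): 0.50, (3, 5): 0.01, (4, 5): 0.01}

def su(a, b):
    return su_pair[(min(a, b), max(a, b))]

def fcbf(threshold=0.0):
    # 1. keep features whose SU with the class clears the threshold,
    #    ranked by descending SU
    ranked = sorted((f for f, s in su_with_class.items() if s > threshold),
                    key=lambda f: su_with_class[f], reverse=True)
    removed = set()
    # 2. walk the ranking; each survivor eliminates any later feature that
    #    correlates with it at least as strongly as with the class
    for i, a in enumerate(ranked):
        if a in removed:
            continue
        for b in ranked[i + 1:]:
            if b not in removed and su(a, b) >= su_with_class[b]:
                removed.add(b)
    return [f for f in ranked if f not in removed]

print(fcbf(0.0))  # → [1, 3, 5]
```

With these scores, feature 2 falls to feature 1 (SU(1,2) = 0.90 ≥ 0.75, its SU with the class) and feature 4 falls to feature 3 in the same way, leaving features 1, 3 and 5 selected.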

The code for this is shown below. Initially we create a class called UNCERTAINTY to hold the SU information about each feature.


class UNCERTAINTY
{
      public UNCERTAINTY(int _feature, double _su)
      {
          Feature = _feature;
          SymmetricalUncertainty = _su;
          Remove = false;
          AlreadySeen = false;
      }
      public int Feature;
      public double SymmetricalUncertainty;
      public bool Remove;
      public bool AlreadySeen;
}

The FCBF function below simply returns a list of feature numbers, i.e. the column numbers of the selected features. Note that it assumes you are still using the variable _allDataItems to hold your data.

       /// <summary>
       /// Get the best features using FCBF
       /// </summary>
       /// <param name="threshold">FCBF threshold (0-1)</param>
       /// <returns>List of the column numbers of the selected features</returns>
       public List<int> FCBF(double threshold)
       {
            List<UNCERTAINTY> featuresFound = new List<UNCERTAINTY>();
 
            // Calculate the symmetric uncertainty between each feature and the class (the class is 'feature' 0).
            for (int featureCol = 1; featureCol < _allDataItems[0].EncodedSequence.Count; featureCol++)
            {
                // If symmetrical uncertainty of this feature with the class is greater than threshold then add it to list.
                double SU = SymmetricalUncertainty(featureCol, 0);
                if (SU > threshold)
                {
                    UNCERTAINTY u = new UNCERTAINTY(featureCol, SU);
                    featuresFound.Add(u);
                }
            }

            // Order the features above the threshold by descending SU
            featuresFound = featuresFound.OrderByDescending(x => x.SymmetricalUncertainty).ToList();

            while (true)
            {
                UNCERTAINTY uElement = featuresFound.Where(x => x.Remove == false && x.AlreadySeen == false).FirstOrDefault();
                if (uElement == null)
                    break;

                uElement.AlreadySeen = true;

                for (int i = featuresFound.IndexOf(uElement) + 1; i < featuresFound.Count; i++)
                {
                    if (featuresFound[i].Remove == true) // Has been removed from list so ignore
                        continue;

                    double SU = SymmetricalUncertainty(featuresFound[i].Feature, uElement.Feature);
                   

                    if (SU >= featuresFound[i].SymmetricalUncertainty)
                    {
                        featuresFound[i].Remove = true;
                    }
                }
            }

            SelectedFeatures = featuresFound.Where(x => x.Remove == false).OrderBy(x => x.Feature).Select(x => x.Feature).ToList();
        
            return SelectedFeatures;
        }

I hope someone will find this useful!

Fast Correlation-Based Filter in C# Part 1: Information Entropy and Symmetrical Uncertainty

Imagine a case where you have a large set of data, where each row represents an item of data and each column an attribute of that item. It is more and more common nowadays to want to be able to automatically classify this data into categories by looking at their attributes and trying to find a pattern. A sub-discipline of AI called machine learning helps you do that. A classifier such as a neural network or a support vector machine will be trained on the dataset with the aim of creating a pattern recogniser. However in all but the most trivial cases you will usually have to first figure out which of the data attributes (or ‘features’) you want to use in order to train the classifier. This is because using many hundreds of features can a) take a long time to train, b) include lots of redundant features (i.e. their pattern duplicates that of other features, so doesn’t add any new information), c) be irrelevant (maybe that feature never changes at all), and d) actually confuse the training by providing information that might appear to be usable to the human eye but is actually just noise (think of the SETI program!).

There are many algorithms for finding the features that might be useful, but this article is about the Fast Correlation-Based Filter (FCBF) feature selection technique first introduced by Yu and Liu *. FCBF is often used in bioinformatics research, a domain where machine learning is commonly applied. Note that FCBF will only work reliably with discrete data, not continuous. This means that the values of each feature need to be put into ‘bins’. If your data is continuous in nature then there are binning algorithms (aka discretisation filters) you can use that transform the values of each feature into the label of the appropriate bin. These are outside the scope of this post; however, the free Weka software contains at least one.
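Just to give a flavour of what a discretisation filter does (this is not Weka’s implementation, merely a minimal equal-width binning sketch; the bin-label scheme is made up):

```python
def equal_width_bins(values, n_bins=3):
    # Split the observed range into n_bins equal-width intervals and
    # replace each continuous value with the label of its interval
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins

    def label(v):
        if width == 0:  # all values identical
            return "bin0"
        # clamp so the maximum value falls into the last bin
        return "bin" + str(min(int((v - lo) / width), n_bins - 1))

    return [label(v) for v in values]

print(equal_width_bins([0.1, 0.2, 0.5, 0.9, 1.0], 2))
# → ['bin0', 'bin0', 'bin0', 'bin1', 'bin1']
```

Real discretisation filters (equal-frequency, MDL-based, etc.) are smarter about choosing the cut points, but the output is the same idea: a discrete label per value, which is what FCBF needs.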

I will show how FCBF can be implemented in C#. Note that if you know C# you will certainly see places where this could be made more efficient (I’ve highlighted some in the text), but that is deliberate, to keep the code readable. This first post concentrates on the concepts of information entropy and symmetrical uncertainty, which FCBF uses to rank the information content of the features. The next article will show how FCBF then uses the ranking to eliminate features.

For this example I am going to assume that a record (i.e. an item of data) is represented by a class called FeatureSequence, with one property containing the features for that item called EncodedSequence, which is a List of strings. The first element of this list is always the class (category) label for the item. If you knew you were always going to have a fixed number of features you could of course create a separate property for each, but that could become very big and inflexible very quickly!


public class FeatureSequence
{
     // Remember that the first item is the class label
     public List<string> EncodedSequence { get; set; }
}

You need to read in your data and populate an instance of the above class for each data item, to create a list of FeatureSequences. I’m assuming that you already have enough C# knowledge to be able to do this!

Now comes a little bit of maths. Don’t worry, this is just to illustrate – the code comes in a minute! Like most maths notation, it is really just a shorthand for the many steps that we need to do, and whereas papers and Wikipedia will often miss out the crucial bit of maths information you need, I won’t :-). We will need to calculate the symmetrical uncertainty (SU) value for each feature. The mathematical formula for this is below, where H(X) is the information entropy (i.e. amount of information) of a feature X and H(X,Y) is the joint information entropy of feature X and feature Y:

SU(X,Y)=2(\frac{H(X)+H(Y)-H(X,Y)}{H(X)+H(Y)})

Of course X or Y can also be the classification label of the data too, which is why I’ve implemented class label as simply another feature in EncodedSequence in the code above. As an aside, the numerator is also the formula for mutual information (i.e. the amount that knowing one feature gives information about the other). SU is a little bit of a misnomer (in plain English) because SU actually increases the more certain you can be that feature X helps predict Y and vice-versa, i.e. as you become less ‘uncertain’ SU increases!

These are the formulas for information entropy H(X) and joint information entropy H(X,Y). p(x) is the probability function, i.e. the probability that value x will appear in the feature being examined, and p(x,y) is the joint probability function, i.e. the probability that value x will appear in the first feature being examined when the value in the second feature is y. Hopefully that makes sense, but as a quick example: if there are 10 items in your feature, and a particular value appears 3 times, then the probability of that value is 3/10 or 0.3.

H(X)=-\sum_x p(x)log_2 p(x)
H(X,Y)=-\sum_x \sum_y p(x,y)log_2 p(x,y)
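Before moving on to the C# implementation, here is a quick numeric sanity check of these three formulas in Python. The toy feature columns are made up (they reuse the 3-out-of-10 probability example above); the functions just mirror the definitions directly:

```python
import math
from collections import Counter

def entropy(values):
    # H(X) = -sum over x of p(x) * log2 p(x)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def joint_entropy(xs, ys):
    # H(X,Y) = -sum over (x,y) pairs of p(x,y) * log2 p(x,y);
    # pairing the columns turns the joint case into plain entropy over pairs
    return entropy(list(zip(xs, ys)))

def symmetrical_uncertainty(xs, ys):
    # SU(X,Y) = 2 * (H(X) + H(Y) - H(X,Y)) / (H(X) + H(Y))
    hx, hy = entropy(xs), entropy(ys)
    return 2 * (hx + hy - joint_entropy(xs, ys)) / (hx + hy)

# Toy feature column of 10 items: one value appears 3 times (p = 0.3)
x = ["a"] * 3 + ["b"] * 7
y = list(x)  # identical column, so each fully predicts the other

print(round(entropy(x), 4))           # 0.8813 (bits)
print(symmetrical_uncertainty(x, y))  # 1.0
```

As expected, SU is 1 when one feature completely determines the other, and it drops towards 0 as the two columns become unrelated.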

So to start with we need to calculate entropy. First declare a class called FREQUENCY; a list of these will record each value seen in a feature and the number of times that value appears.

public class FREQUENCY
{
     public string FeatureValue;
     public string Conditioner; // for joint probability
     public double ValueCount;
}

The following function is used to calculate entropy. We simply loop through each data item, recording the number of times each value appears in a list of FREQUENCY. Then we loop through that frequency list, performing the entropy calculation H(X) above on each item and summing the results. In the code I use logarithm to base 2, which gives entropy in bits. Natural log is also commonly used, in which case the unit is nats.

// Somewhere else in your class declare and populate a list of FeatureSequences containing your data:
private List<FeatureSequence> _allDataItems; 

 /// <summary>
 /// Find the Shannon entropy of the passed feature
 /// </summary>
 /// <param name="feature">Column number of variable (start at 0)</param>
 /// <returns>Entropy as a double</returns>
 public double Entropy(int feature)
 {
      double entropy = 0;
      List<FREQUENCY> frequencyList = new List<FREQUENCY>();
            
      // First count the number of occurrences of each value
      // Go through each feature list (i.e. row) on the dataset
      for (int i = 0; i < _allDataItems.Count; i++)
      {
               
          // If the frequency list already has a place for this value then increment its count
          FREQUENCY freq = frequencyList.Where(x => x.FeatureValue == _allDataItems[i].EncodedSequence[feature]).FirstOrDefault();
          if (freq != null)
          {
               freq.ValueCount++;
          }
          // else add a new item to the frequency list
          else
          {
               FREQUENCY newFreq = new FREQUENCY();
               newFreq.FeatureValue = _allDataItems[i].EncodedSequence[feature];
               newFreq.ValueCount = 1;
               frequencyList.Add(newFreq);
          }
      }

      // Total number of observations
      double total = frequencyList.Sum(x => x.ValueCount);

      // For each item on the frequency list...
      for (int i = 0; i < frequencyList.Count; i++)
      {
          // Calculate the probability
          double probability = frequencyList[i].ValueCount / total;

                
          // increase the entropy value
          entropy += (probability * Math.Log(probability, 2));
          // Note: can also use entropy += (probability * Math.Log((1/probability), 2));
      }

      return entropy*-1;
 }

The joint entropy function is very similar. Instead of passing one feature you pass two, and the list of FREQUENCY records the number of times each combination of the first and second feature occurs. Otherwise it is just the same.

/// <summary>
/// Return the joint entropy of 2 features.
/// </summary>
/// <param name="firstFeature">Column number of first variable (start at 0)</param>
/// <param name="secondFeature">Column number of second variable (start at 0)</param>
/// <returns>Joint entropy as a double</returns>
public double JointEntropy(int firstFeature, int secondFeature)
{
     double jointEntropy = 0;
     List<FREQUENCY> frequencyList = new List<FREQUENCY>();
            
     // First count the number of occurrences of each value of feature 1 for each value of feature 2
     // Go through each feature list (i.e. row) on the dataset
     for (int i = 0; i < _allDataItems.Count; i++)
     {
          // If the frequency list already has a place for this value then increment its count
          FREQUENCY freq = frequencyList.Where(x => x.FeatureValue == _allDataItems[i].EncodedSequence[firstFeature] &&
                                               x.Conditioner == _allDataItems[i].EncodedSequence[secondFeature]).FirstOrDefault();
           if (freq != null)
           {
                freq.ValueCount++;
           }
           // else add a new item to the frequency list
           else
           {
               FREQUENCY newFreq = new FREQUENCY();
               newFreq.FeatureValue = _allDataItems[i].EncodedSequence[firstFeature];
               newFreq.Conditioner = _allDataItems[i].EncodedSequence[secondFeature];
               newFreq.ValueCount = 1;
               frequencyList.Add(newFreq);
           }
       }

       double total = frequencyList.Sum(x => x.ValueCount);

       // For each item on the frequency list...
       for (int i = 0; i < frequencyList.Count; i++)
       {
           // Calculate the probability
           double jointProbability = frequencyList[i].ValueCount / total;
           // increase the entropy value
           jointEntropy += jointProbability * Math.Log(jointProbability,2);
       }

       return jointEntropy * -1;
}

Finally a function to produce symmetrical uncertainty.

/// <summary>
/// Returns the symmetrical uncertainty of the first variable with the second
/// </summary>
/// <param name="firstFeature">Column number of first variable (start at 0)</param>
/// <param name="secondFeature">Column number of second variable (start at 0)</param>
/// <returns>Symmetrical uncertainty as a double</returns>
public double SymmetricalUncertainty(int firstFeature, int secondFeature)
{
      double firstEntropy = Entropy(firstFeature);
      double secondEntropy = Entropy(secondFeature);
 
      return 2 * ((firstEntropy + secondEntropy - JointEntropy(firstFeature, secondFeature)) / 
                 (firstEntropy + secondEntropy)
                 );
}

You can probably see that ideally it would be more efficient to merge the two entropy functions, and calculate both of them at the same time, but that might have looked a bit confusing to the learner!

The next post will focus on how SU is used within FCBF.

* Yu, L. and Liu, H. (2004). Efficient feature selection via analysis of relevance and redundancy. Journal of Machine Learning Research, 5, 1205–1224.

C# port of Gorodkin’s generalised MCC algorithm (RkCC)

This is something from my MSc project that I thought would be useful to share!

The Matthews correlation coefficient (MCC) is a smart way of measuring the overall accuracy of a classification algorithm. Say you have some data and you want to classify it into two categories, A and B. In classification you initially ‘train’ a classifier and then test it using a separate set of data. In both sets you obviously already know the class each data item falls into, so you can score the results. You could simply record the accuracy as the proportion of items classified correctly, i.e. (True Positives + True Negatives) / Total. The trouble with this approach is that if you had (for instance) 90 items of class A and 10 of class B in your test set, a classifier that simply labelled everything as A would still score 0.9 (90%) even though every single class B item was classified incorrectly! MCC is a clever method of measuring accuracy that takes such disparities in class size into account: in this case it would return 0, and if just one class B item were classified correctly it would return about 0.3.

Unfortunately MCC is only useful for binary classification problems such as the above. As soon as you add a third class C (or more) you can’t use it. A generalisation of MCC was therefore created by Jan Gorodkin, the mathematical details of which are in the paper on his website (http://rk.kvl.dk/). He also supplied some code for it in AWK, but I was using C# so needed to port it. The source code below is my translation, and it seems to give the same results as his!

To use it you need to pass a List of integer arrays to the CalculateMCC() method. The list should be in a format like this (example shown in the code too):

{ Array of class A results[Classed as A, Classed as B, Classed as C],
Array of class B results[Classed as A, Classed as B, Classed as C],
Array of class C results[Classed as A, Classed as B, Classed as C],
…. }

 double MCC = MCCCalculator.CalculateMCC(new List<int[]>() 
                      { new int[2] {90, 0}, new int[2]{9, 1} });

The class:

  public static class MCCCalculator
    {
        /// <summary>
        /// Return the generalised MCC value based on Gorodkin (2004)
        /// See http://rk.kvl.dk/
        /// </summary>
        /// <param name="confusionMatrix">Square confusion matrix (rows = actual class, columns = predicted class)</param>
        /// <returns>MCC as a double</returns>
        public static double CalculateMCC(List<int[]> confusionMatrix)
        {
            double MCC = 0;

            // calc total data samples
            int totalSamples = 0;
            for (int i = 0; i < confusionMatrix.Count; i++)
            {
                totalSamples += confusionMatrix[i].Sum();
            }

            // calc trace (sum of true positives)
            int trace = 0;
            for (int i = 0; i < confusionMatrix.Count; i++)
            {
                trace += confusionMatrix[i][i];
            }

            // sum row -> column dotproduct
            int rowcol_sumprod = 0;
            for (int row = 0; row < confusionMatrix.Count; row++)
            {
                for (int col = 0; col < confusionMatrix[0].Count(); col++)
                {
                    int[] rowArray = getRow(confusionMatrix, row);
                    int[] colArray = getCol(confusionMatrix, col);
                    rowcol_sumprod += dotProduct(rowArray, colArray);
                }
            }

            // sum row -> row dotproduct
            int rowrow_sumprod = 0;
            for (int row = 0; row < confusionMatrix.Count; row++)
            {
                for (int row2 = 0; row2 < confusionMatrix.Count; row2++)
                {
                    int[] rowArray = getRow(confusionMatrix, row);
                    int[] rowArray2 = getRow(confusionMatrix, row2);
                    rowrow_sumprod += dotProduct(rowArray, rowArray2);
                }
            }

            // sum col -> col dotproduct
            int colcol_sumprod = 0;
            for (int col = 0; col < confusionMatrix[0].Count(); col++)
            {
                for (int col2 = 0; col2 < confusionMatrix[0].Count(); col2++)
                {
                    int[] colArray = getCol(confusionMatrix, col);
                    int[] colArray2 = getCol(confusionMatrix, col2);
                    colcol_sumprod += dotProduct(colArray, colArray2);
                }
            }

            int cov_xy = (totalSamples * trace) - rowcol_sumprod;
            int cov_xx = (totalSamples * totalSamples) - rowrow_sumprod;
            int cov_yy = (totalSamples * totalSamples) - colcol_sumprod;

            double denominator = Math.Sqrt((double)cov_xx * (double)cov_yy);

            if (denominator == 0)
                MCC = 1;
            else
                MCC = cov_xy / denominator;

            return MCC;

        }

        /// <summary>
        /// Return the specified row from the confusion matrix
        /// </summary>
        /// <param name="confusionMatrix"></param>
        /// <param name="row"></param>
        /// <returns></returns>
        private static int[] getRow(List<int[]> confusionMatrix, int row)
        {
            int[] rowArray = new int[confusionMatrix[row].Count()];

            for (int i = 0; i < confusionMatrix[row].Count(); i++)
            {

                rowArray[i] = confusionMatrix[row][i];
            }

            return rowArray;
        }

        /// <summary>
        /// Return the specified column from the confusion matrix
        /// </summary>
        /// <param name="confusionMatrix"></param>
        /// <param name="col"></param>
        /// <returns></returns>
        private static int[] getCol(List<int[]> confusionMatrix, int col)
        {
            int[] colArray = new int[confusionMatrix.Count()];

            for (int i = 0; i < confusionMatrix.Count(); i++)
            {

                colArray[i] = confusionMatrix[i][col];
            }

            return colArray;
        }

        /// <summary>
        /// Return the dotproduct of the two arrays.
        /// </summary>
        /// <param name="array1"></param>
        /// <param name="array2"></param>
        /// <returns></returns>
        private static int dotProduct(int[] array1, int[] array2)
        {
            int dotProduct = 0;

            for (int i = 0; i < array1.Count(); i++)
            {
                dotProduct += (array1[i] * array2[i]);
            }

            return dotProduct;
        }
    }

There is one slight possible bug in the original that I’ve deliberately carried over into this version, as I wasn’t quite sure whether it was intentional. When the denominator is 0 but there are more than 0 samples, I’m pretty sure it should return 0 rather than 1; however I’ll go with the experts for the moment! Just something to consider if you use it.
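As a quick cross-check of the calculation (and of the 0 and 0.3 figures quoted in the binary example earlier), here is a compact Python version of the same Gorodkin formula. It is just an illustration, and note it takes the opposite side of the denominator question by returning 0 in the degenerate case:

```python
import math

def gorodkin_mcc(C):
    # RK statistic from Gorodkin (2004); C is a square confusion matrix
    # with rows = actual class and columns = predicted class
    n = len(C)
    N = sum(map(sum, C))                       # total samples
    trace = sum(C[k][k] for k in range(n))     # sum of true positives

    row = lambda k: C[k]
    col = lambda k: [C[i][k] for i in range(n)]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))

    # sums of dot products over all row/column pairs
    rowcol = sum(dot(row(k), col(l)) for k in range(n) for l in range(n))
    rowrow = sum(dot(row(k), row(l)) for k in range(n) for l in range(n))
    colcol = sum(dot(col(k), col(l)) for k in range(n) for l in range(n))

    cov_xy = N * trace - rowcol
    denom = math.sqrt((N * N - rowrow) * (N * N - colcol))
    # Degenerate case: return 0 here rather than 1 as in the port above
    return cov_xy / denom if denom else 0.0

print(round(gorodkin_mcc([[90, 0], [9, 1]]), 4))   # 0.3015
print(gorodkin_mcc([[90, 0], [10, 0]]))            # 0.0 (denominator is zero)
```

For a 2x2 matrix this reduces to the familiar binary MCC formula, which is a handy way to convince yourself the generalisation is right.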

Fisher-Yates shuffle as a generic list extension in C#

The Fisher-Yates shuffle is just a simple way of randomising the order of the contents of a list. For one of my last projects I needed to do this in C# on lists of various types, so I decided to implement it as an extension method on the generic List&lt;T&gt;. Now I should say right off that Microsoft do not generally recommend extending existing .NET types, just in case they change them in the future and break your code. However that wasn’t a worry for me in this project, and judging by the number of people on the web giving examples of extending basic types and classes it isn’t a worry for them either. It looks cool anyway. 🙂

Here is the code for the extension:

 public static class ListExtensions
    {
        /// <summary>
        /// Shuffle this list
        /// </summary>
        public static void Shuffle<T>(this List<T> thisList, Random RandomNumberGenerator)
        {
            for (int i = thisList.Count - 1; i >= 1; i--)
            {
                // Next's upper bound is exclusive, so use i + 1 to allow j == i;
                // otherwise an element can never stay in place and the shuffle is biased
                int j = RandomNumberGenerator.Next(0, i + 1);
                T tmp = thisList[i];
                thisList[i] = thisList[j];
                thisList[j] = tmp;
            }
        }

        /// <summary>
        /// Return a shuffled copy of this list (leaves this list as it was)
        /// </summary>
        public static List<T> ShuffleAndCopy<T>(this List<T> thisList, Random RandomNumberGenerator)
        {
            T[] shuffled = new T[thisList.Count];
            thisList.CopyTo(shuffled);
            for (int i = shuffled.Length - 1; i >= 1; i--)
            {
                int j = RandomNumberGenerator.Next(0, i + 1); // upper bound is exclusive
                T tmp = shuffled[i];
                shuffled[i] = shuffled[j];
                shuffled[j] = tmp;
            }

            return shuffled.ToList();
        }

        /// <summary>
        /// Shuffle this list
        /// </summary>
        public static void Shuffle<T>(this List<T> thisList)
        {
            Random RandomNumberGenerator = new Random();

            for (int i = thisList.Count - 1; i >= 1; i--)
            {
                int j = RandomNumberGenerator.Next(0, i + 1); // upper bound is exclusive
                T tmp = thisList[i];
                thisList[i] = thisList[j];
                thisList[j] = tmp;
            }

        }

        /// <summary>
        /// Return a shuffled copy of this list (leaves this list as it was)
        /// </summary>
        public static List<T> ShuffleAndCopy<T>(this List<T> thisList)
        {
            Random RandomNumberGenerator = new Random();

            T[] shuffled = new T[thisList.Count];
            thisList.CopyTo(shuffled);
            for (int i = shuffled.Length - 1; i >= 1; i--)
            {
                int j = RandomNumberGenerator.Next(0, i + 1); // upper bound is exclusive
                T tmp = shuffled[i];
                shuffled[i] = shuffled[j];
                shuffled[j] = tmp;
            }

            return shuffled.ToList();
        }

       
    }

There are two methods: Shuffle(), which shuffles the current list, and ShuffleAndCopy(), which leaves the current list as it is and returns a shuffled copy. The reason I wrote two overloads of each is perhaps obvious if you know how the random number generator works, but I’ll tell you anyway. 🙂 One overload declares a random number generator internally. This is OK provided you are only likely to call it less than about once a second. Call it more frequently than that and, because Random is seeded from the system clock, each new generator starts from the same seed, so you find the list is always shuffled in the same way. To get around that I created overloads that let me pass in a single shared random number generator. In a WinForms app this could perhaps be declared within the main Program class, and in a web app perhaps within the Application state (I’ve only tried the former!).
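The identical-seed pitfall is easy to demonstrate in any language. Here is a quick Python sketch (Python seeds its generators differently by default, so the seed is fixed explicitly purely to illustrate the effect): two generators created with the same seed produce exactly the same shuffle.

```python
import random

def fisher_yates(items, rng):
    # In-place Fisher-Yates: j is drawn from [0, i] inclusive
    for i in range(len(items) - 1, 0, -1):
        j = rng.randrange(i + 1)
        items[i], items[j] = items[j], items[i]

a, b = list(range(10)), list(range(10))
fisher_yates(a, random.Random(42))  # same seed...
fisher_yates(b, random.Random(42))  # ...same "random" order
print(a == b)                       # True
```

Sharing one long-lived generator across calls, as in the overloads above, is what breaks this correlation.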

Submitting a multipart AJAX form with CKEditor textarea

OK, the subject of this post is pretty niche, but that’s exactly why I am posting about it: I do this so rarely that I’d only forget it by the next time!

With ASP.NET MVC, if you want to allow a file upload on a form submitted by AJAX, you have to intercept the form’s submit event and do the submit yourself via JavaScript rather than using MVC’s helpers. This is detailed in this useful Stack Overflow answer: How to do a aspnet mvc ajax form post with multipart form data

What the answer doesn’t mention (as it wasn’t asked!) is that if your form contains an instance of the popular rich text editor CKEditor, this code alone will not transfer the editor’s contents to the form. I think this is because CKEditor waits for the submit event before copying its data back into the underlying field.

It took a bit of digging, as I had not had to look at the CKEditor API before, but if you take the code from the above link as a starting point and add the following anywhere between the event.preventDefault() call and the ajax() call, it seems to work OK.


            for (var instance in CKEDITOR.instances) {
                CKEDITOR.instances[instance].updateElement();
            }

Adding CKEditor to an ASP.MVC AJAX dialog

CKEditor is one of the most popular rich text editors for the Web out there, but although I’ve used it many times I’ve never added it to an MVC site before. Although following the instructions is simple enough for a static view, adding it to a dynamically created view within a jQuery dialog box proved a little bit trickier. As usual there are bits and bobs around on the Net about it, but never the whole thing in one place! So I’ve collated what I’ve found, and added some bits I discovered myself.

OK, so say you want to popup an Edit screen in a jQuery dialog, generating the Edit view to go within the dialog on-the-fly. Download the CKEditor and put the scripts into your site, it doesn’t matter what configuration you use. I usually just put it under the Scripts directory. Also download the jQuery plugin jquery.livequery and put it into your site.

You need the following script links in your page header. whatever.js is just the name of a JS file that you are going to create in a minute; obviously change the paths of the CKEditor and livequery scripts if necessary. It is crucial to include the jQuery adapter.

<script src="@Url.Content("~/Scripts/jquery.livequery.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/ckeditor/ckeditor.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/ckeditor/adapters/jquery.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/whatever.js")" type="text/javascript"></script>

To start with you would put an Ajax.ActionLink on your View with a call to the Edit screen a bit like this (Razor view syntax):

@Ajax.ActionLink("Edit", "Edit", "Whatever", new AjaxOptions { UpdateTargetId = "editWhatever", OnBegin = "showEdit()"})

Where editWhatever is just an empty div somewhere else on the page, I’ll come onto the showEdit() Javascript in a minute.

The Edit view (and view model) would be something like below. Note that for some reason your field must be an Html.TextArea or TextAreaFor, a regular HTML textarea tag does not work, even though it is fine to use that tag in a non-AJAX form.

@model MyProject.ViewModels.WhateverEditViewModel

@using (Ajax.BeginForm("Save", "Whatever", new AjaxOptions { UpdateTargetId = "editWhatever", OnSuccess = "hideEditOnSuccess()" })) 
{
    @Html.AntiForgeryToken()
    @Html.HiddenFor(m => m.Site.Code)

    <div class="question-editor-label" >
       @Html.LabelFor(model => model.WhateverPropertyToEdit)
    </div>
        
    <div class="section-editor-field" >
            @Html.TextAreaFor(model => model.WhateverPropertyToEdit)
     </div>
  
     @Html.ValidationSummary()

    
    <button title="Save" type="submit" value="Save"  >Save</button> 
    <button type="button" value="Cancel" onclick="doCancel()" title="Cancel" >Cancel</button>
}

And you could probably work out the view model, but here it is:

namespace MyProject.ViewModels
{
    public class WhateverEditViewModel
    {
        [Display(Name = "Whatever Description")]
        public string WhateverPropertyToEdit { get; set; }
    }
}

And remember that the ‘Save’ controller action method needs to accept HTML, so you have to decorate it with the [ValidateInput(false)] attribute.

[HttpPost]
[ValidateAntiForgeryToken]
[ValidateInput(false)]
public ActionResult Save(WhateverEditViewModel passedModel)
{
   ...
}

Below is the Javascript for the page contained in whatever.js. In the $(document).ready() function it is crucial to use livequery to detect the appearance of the field on the edit form, as it does not exist when the document is first created. Use the jQuery call to ckeditor() to launch CKEditor. The rest should hopefully be obvious.

$(document).ready(function () {

    $('#WhateverPropertyToEdit').livequery(function () {
        $('#WhateverPropertyToEdit').ckeditor();
    });
});

function showEdit() {
    $('#editWhatever').dialog(
        {
            modal: true,
            width: 1000,
            title: 'Edit Whatever',
            dialogClass: 'edit_whatever_class',
            ....
        });
}


function doCancel() {
    $('#editWhatever').dialog('close');
}

function hideEditOnSuccess() {
    if ($('#editWhatever').find(".validation-summary-errors").length === 0) {
        $('#editWhatever').dialog('close');
        /* do anything to the 'host' screen you need to here. */
    }
}

And you should now see the editor appear on your dialog box.