VARUNA JAYASIRI

@vpj

Getting the first B2B customer

August 11, 2014

It took Forestpin almost two years to get its first big customer. Making the first sale is always hard. It was a tough ride: talking to a lot of potential customers, changing product strategy from time to time, taking classes on pitching, and of course a lot of programming. We learned a lot in the process.

Beginning

We started off in March 2012 with a meeting of a few guys, some of whom believed that there was a lack of tools for finding fraud and irregularities in financial transactions.

The idea was to develop financial data analytics software to highlight anomalies.

The first couple of months were spent on research. We talked to auditors, accountants and managers to see if they had the problem we were trying to solve, and discussed the solutions they were already using.

Before starting on the product, we implemented a bunch of analyses to highlight anomalies in payments data, ran them on a large dataset, and sent compiled reports to the people we had talked to earlier. We wanted to make sure they actually wanted what we were building. It was quite useful, since we also got feedback about other analyses they were using.

It was around May when we first started working on the product, and it was ready by mid October 2012. Sales agents were approached, documentation and legal documents were prepared, and everything was ready; all we needed was customers. We wanted to get the first few customers in Sri Lanka before selling in the US. It was easier to support clients in Sri Lanka since our developers were based there.

Duplicates Test

Seeking customers

We decided to get sales through sales agents. It took us a long time to realize that it wouldn't work out with sales agents unless we got the first few sales ourselves. Losing a few months to a bad decision is one of the worst things that can happen to a startup: the finances drain away, founders get demotivated, and competition increases. Luckily, we managed to get a small manufacturing company in Sri Lanka through a contact. But we needed a few big Sri Lankan customers.

We pitched to companies through contacts, but it was hard, mostly because we were programmers and not marketers. We also sometimes didn't get to pitch to the decision makers, and then it takes a really long time before anything happens. Pitching without customer references made it even harder.

We didn't know how hard to push for sales. How often should we call? Should we send a reminder? Pushing too much could be annoying, while waiting too long could reduce interest.

Meetings getting canceled and postponed (sometimes after we had left for the meeting) were very discouraging, perhaps because we don't come from a marketing background.

Forestpin Lite

One of the biggest difficulties in sales was proving that Forestpin could benefit companies. Trials weren't very useful, since a full deployment needed hardware and connectors to the customer's existing databases.

Around May or June 2013, we developed Forestpin Lite, a standalone application for analyzing small datasets. Potential customers could scan a dataset with Forestpin Lite on a laptop and get an idea of how Forestpin works. It had a limited feature set, but gave a pretty good idea of what we had to offer. This was quite useful in getting the first big sale.

With Forestpin Lite, we approached the companies we had pitched earlier. We offered to analyze their data and give an overview. And it worked. The first company we approached (Hemas Holdings) liked it when they saw Forestpin Lite's analysis of their own data. It pointed out areas that required attention.

Training

After doing a trial in August - September 2013, we offered product training to Hemas Holdings' risk team before they decided to purchase Forestpin. We believed it would help gain traction within their organization. We had to make the training useful for them even if they didn't purchase Forestpin.

Chethiya and I had no experience in training. We were lucky that Sifaan (a professional trainer we've known for a long time) helped us prepare and deliver the training. I believe the training was one of the key factors in getting our first sale.

Next version of the software

The training was in late November 2013, and we had the second version of Forestpin ready by then. They seemed quite impressed with the progress we had made since the first version they saw about nine months earlier.

Timeseries Analysis - Redesign

All these factors helped us get the first big sale. Now it's up to us to make the existing customers really happy and look for referrals. Having a couple of reference customers helps tremendously in B2B sales.

What we should have done differently

  1. Develop Forestpin Lite at the beginning

    We should have started with Forestpin Lite. It would have been much easier to validate the idea and find product-market fit. We spent a lot of time rewriting the entire system when we moved from the first version to the second. We would have saved all that time and gained traction faster if we had had Forestpin Lite early on.

  2. Don't approach sales agents before the first few sales

    Pitching to sales agents is harder than pitching to customers when you don't have reference customers. We lost at least six months pitching to agents. If we had spent that time talking to potential customers and users instead, we would at least have got a lot of feedback to improve the product.

  3. Get in touch with user communities

    Although we contacted a few potential customers and users at the beginning, we didn't try to get in touch with these users' communities. We should have got involved in their communities on social media, for example LinkedIn groups for auditors, accountants and finance managers. You can definitely get good insights from discussions with these groups.

  4. Talk to a lot of companies in parallel

    We initiated discussions with a lot of companies, but from the trial stage onwards we engaged with only one company at a time. It is a trade-off between focusing on one company and dividing our limited resources among a few, but waiting until you finish the engagement with one company before moving on to the next can be disastrous. There were also so many gaps and delays during the engagement that we could have worked with at least five companies in parallel.

However, it wasn't that bad in our case, since the first company we gave a trial to decided to purchase Forestpin.

Things we learned

  1. A lot of people can influence the buying decision

    Unlike in consumer sales, in B2B sales a lot of people influence the buying decision. They have different requirements and responsibilities, so to make a sale happen smoothly, all of these parties should be kept satisfied. I think this is even more relevant when the vendor doesn't have a large portfolio.

    In our case, the finance team, the internal auditors or risk team, the IT team and top management were all involved in the decision making. Although it was top management that made the final decision, it was based on the input of the other parties.

    If the finance team sees our product as something that will put more controls on them, they are not going to like it. Similarly, if the auditors or the IT team see it as something that is going to increase their workload, they are not going to like it either.

    So the pitch needs to show that the product will benefit the organization as well as individual teams.

  2. Sales take a long time

    It takes a really long time, sometimes more than a year. The organizations are busy with loads of work, and different teams become available at different times.

    Implementation and adoption can also take a lot of time. We don't have much experience at this stage yet, but almost all the organizations we spoke to had an ongoing software implementation project that had spanned more than six months.

  3. Urgency

    Sales are going to take forever if there is no sense of urgency. We haven't figured out how to tackle this yet.

    At a relatively small company, the sale happened within a couple of weeks because they wanted their data analyzed quickly. Perhaps they had lost money to fraud or an inefficiency in the recent past, and wanted to make sure everything else was clean before losing any more money.

    Having this sort of reason to make things happen fast can be very helpful in sales. This article by HubSpot illustrates how you can create urgency.


Black or White

July 27, 2014

Which background is better? Most text editors and document readers have light backgrounds, but there are a lot of programmers who use dark backgrounds, and there are some analytics and dashboard applications with dark backgrounds.

img/dark.png

Stephen Few has criticized using dark backgrounds for dashboards in his book Information Dashboard Design, citing an image from Roambi. Incidentally, Roambi later moved to a white background and then brought back the option of a dark background.

Have you noticed that many business intelligence (BI) software companies have introduced black screens as the standard for mobile devices? Is this because mobile devices work better with black screens? If you look for the research, as I have, it isn’t likely that you’ll find any.

No one seems to question the efficacy of light backgrounds for reading text. Why the difference? Text and graphics both involve objects that are constructed of lines and filled in areas of color. Do they differ in a way that demands a different background color? I don’t think so.

In the article the quote above is from, Few discusses arguments made by vendors who use dark backgrounds. Most of the arguments he considers are technical (battery saving, sunlight reflection), and he doesn't discuss which is better if technology weren't a constraint. I feel the discussion is biased towards light backgrounds.

The article is based on the idea that, because white backgrounds have been used in editors and readers for a long time, it is better to stick with them until there is solid evidence that dark backgrounds are better.

Editing and reading software use white backgrounds to mimic paper. Paper is whitish and has been around for centuries. However, that was probably because of technological limitations on paper and ink colors; it was not a decision based on evidence that white backgrounds are perceptually better.

Edward Tufte has brought this up in one of his discussions.

The usual metaphor for screens (projection and computer) these days seems to be black type on a white background, that is, a paper metaphor. This sometimes results in video glare, with lots of rays coming from the background. Sometimes the old fashioned computer screen seems less tiring, showing lit-up text on a dead backround.

Then he continues to discuss why his website has a light background.

But my metaphor here is paper, like a book. If reproduced on a dark background, my images (which are generally very light in value) would come blaring and glaring out of the dark surround.

Light backgrounds produce video glare. So turn down the screen brightness at night and after working many hours at the computer. It is often useful to dull down the electric-blue white of the computer screen with a soft background tone, as done here.

I feel what needs to be considered is how people use your application. If they switch between your application and other applications with light backgrounds (e.g. websites, editors, physical paper), then you should consider a light background. The reverse won't happen often, since the majority of software uses light backgrounds, with the exception of photo editors.

If the software is used in isolation, dark or light wouldn't make a big difference. Then factors such as the device, power consumption, coolness (aesthetics) and glare can be considered.

img/light.png

These dark and light screenshots are still a work in progress. We were trying different backgrounds to see which is better. When placed inside this blog, the screenshot with the light background looks a lot better because of its surroundings.


SEO Crap

July 27, 2014

SEO is dead, or at least much different from what it was known to be. But there are plenty of consultants who market SEO as if it is something hard to get right, and many organizations fall for it.

Search engines do not want to rank websites with some special SEO sauce at the top; they want to show the websites people are looking for.

There were days when you could use neat little tricks (link farms, etc.) to rank higher in search results. This was possible because it is hard for an algorithm to figure out which content is better. Search engine algorithms have since gotten better and no longer fall for these tricks.

Yet I knew from experience that the real secret to SEO was not about tricks but about making your site the best it could be for your users while keeping the search engines in mind. It was true when I started doing SEO and it’s true now. Doing that always, always, always works to bring more targeted search engine traffic to your website. But, sadly, the tricks that the other SEO people were doing and writing about also worked, albeit temporarily.

Jill Whalen, SEO consultant

So as long as your website

  1. has good content which people like,
  2. is accessible to internet users,
  3. uses proper HTML syntax, and
  4. is crawlable

you don't have to worry about hiring SEO consultants. If your web developer can't get 2, 3 and 4 right, you should get a better developer; it often takes effort to get those wrong.
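For points 3 and 4, "getting it right" mostly means plain, semantic markup and not blocking crawlers; a minimal sketch (the site name and URLs here are placeholders, not a real example):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <!-- A descriptive title and meta description are what search
       results typically display for the page -->
  <title>Acme Widgets - Hand-made widgets</title>
  <meta name="description" content="Hand-made widgets, shipped worldwide.">
</head>
<body>
  <h1>Acme Widgets</h1>
  <!-- Plain links are crawlable; content reachable only through
       scripts or forms may not be -->
  <a href="/catalog.html">Browse the catalog</a>
</body>
</html>
```

Crawlability is largely the absence of mistakes: check that robots.txt isn't disallowing your pages and that every page is reachable through ordinary links.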

This document by Google is a pretty good guide to all of this.


2 to 10 times faster HTML animations

May 10, 2014

When animating or moving HTML elements, setting their position with -webkit-transform: matrix3d() gives two to ten times faster frame rates compared to top/left or -webkit-transform: matrix(). On mobile devices you can observe native-app-like performance with transform: matrix3d.

Demo

It moves a panel that contains 10,000 div elements. On my MacBook Air running Chrome, transform: matrix3d gives 30-40 fps, while position: top/left and transform: matrix give 10-15 fps. You can check the frame rate by selecting Show FPS meter on the Rendering tab of Chrome Developer Tools.
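The three ways of setting a position differ only in what gets written to the element's style. A minimal JavaScript sketch (matrix3dTransform and the move* helpers are illustrative names, not part of any library):

```javascript
// Build a CSS matrix3d() string that translates by (x, y).
// This is a 4x4 identity matrix in column-major order with the
// translation in the last column; the 3d form hints the browser to
// composite the element on its own layer, skipping repaints on movement.
function matrix3dTransform(x, y) {
  return 'matrix3d(1,0,0,0, 0,1,0,0, 0,0,1,0, ' + x + ',' + y + ',0,1)';
}

// Slow path: changing top/left triggers layout and paint each frame.
function moveWithTopLeft(el, x, y) {
  el.style.left = x + 'px';
  el.style.top = y + 'px';
}

// Fast path: a transform on a composited layer is applied at draw time.
function moveWithMatrix3d(el, x, y) {
  el.style.webkitTransform = matrix3dTransform(x, y);
}
```

To animate, call moveWithMatrix3d with the new position from a requestAnimationFrame callback each frame.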

The following image shows a snapshot of the frames timeline with top/left.

Timeline for top/left

This is a snapshot of the frames timeline with transform: matrix3d.

Timeline for transform:matrix3d

transform: matrix3d eliminates the paint operation, which increases the frame rate.

The 3d translation layers offer a way of pre-blitting all the stuff inside the DOM element into a layer, which is therefore available for direct blitting operations inside the render tree. Well, at least that's the conceptual idea behind.

martensms on html5gamedevs.com

The frame rate with transform: matrix3d becomes about 10 times higher than the other methods as the content of the moving panel gets heavier. I tried it with about 1,000 small SVG graphs, each with 1,000 data points.

Most of the discussions I found on the internet compare top/left to transform: matrix or transform: translate (see notes 1 and 2 below), with not much information about using transform: matrix3d.

1 An old discussion on Hacker News about transform: translate and top/left (3d transformations are not considered): Myth busting the HTML5 performance of transform:translate vs. top/left

2 Some other advantages of using transform: Why Moving Elements With Translate() Is Better Than Pos:abs Top/left


Weya.coffee

March 19, 2014

Weya.coffee is a lightweight library with no dependencies to generate DOM elements. We developed it to replace Coffeecup as a client side template engine. Because of its simplicity and performance, we are also using Weya to replace DOM manipulation of d3.js in data visualizations.

Here's a small example to show the usage.

userElems = []
Weya container, ->
 @div ".users", ->
  for user, i in users
   userDiv = @div '.user', on: {click: editUser}, ->
    name = @span ".name", user.name
    @span ".phone", user.phone
    if user.image?
     @img src: user.image

   userDiv.userId = i
   userElems.push user: user, name: name

The above code creates a list of users. It binds the data to the DOM element (userDiv.userId = i) and also keeps track of all the DOM elements in userElems. This is important if you want to manipulate the DOM without reloading the entire user list; for example, if a user's name changes you can update it with userElems[changedUserId].name.textContent = changedUserName.
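The update pattern above can be sketched as a small helper (the function name is mine; in the browser entry.name is the span created by @span ".name", but any object with a textContent property behaves the same for illustration):

```javascript
// Sketch: update one tracked element when a user's data changes,
// using the userElems array built in the example above.
// No re-render of the whole list is needed.
function updateUserName(userElems, userId, newName) {
  var entry = userElems[userId];
  entry.user.name = newName;         // keep the data object in sync
  entry.name.textContent = newName;  // patch only the affected DOM node
}
```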

Is it a template engine?

Weya is quite similar to Coffeecup in terms of syntax, but it's much faster, so it doesn't choke when you have lots of elements.

Also, Weya lets you register event handlers. I feel this is much cleaner than registering events later with CSS selectors, and it's easier to maintain the code since events are registered within the DOM creation code.

Can it replace d3.js?

We use Weya to replace almost all of the d3.js DOM manipulation.

Code written with Weya is simpler, shorter and nicely indented. Here's the code that draws the bar chart in this example.

Weya svg, ->
 for d in data
  @g ".g", transform: "translate(#{x0 d.State},0)", ->
   for age in d.ages
    @rect
     width: x1.rangeBand()
     x: x1 age.name
     y: y age.value
     height: height - y age.value
     fill: color age.name

 for d, i in ageNames.slice().reverse()
  @g ".legend", transform: "translate(0,#{i * 20})", ->
   @rect x: width - 18, width: 18, height: 18, fill: color d
   @text
    x: width - 24, y: 9, dy: ".35em"
    style: {'text-anchor': "end"}, text: d

Here's the code that does the same with d3.js.

var state = svg.selectAll(".state")
    .data(data)
  .enter().append("g")
    .attr("class", "g")
    .attr("transform", function(d) { return "translate(" + x0(d.State) + ",0)"; });

state.selectAll("rect")
    .data(function(d) { return d.ages; })
  .enter().append("rect")
    .attr("width", x1.rangeBand())
    .attr("x", function(d) { return x1(d.name); })
    .attr("y", function(d) { return y(d.value); })
    .attr("height", function(d) { return height - y(d.value); })
    .style("fill", function(d) { return color(d.name); });

var legend = svg.selectAll(".legend")
    .data(ageNames.slice().reverse())
  .enter().append("g")
    .attr("class", "legend")
    .attr("transform", function(d, i) { return "translate(0," + i * 20 + ")"; });

legend.append("rect")
    .attr("x", width - 18)
    .attr("width", 18)
    .attr("height", 18)
    .style("fill", color);

legend.append("text")
    .attr("x", width - 24)
    .attr("y", 9)
    .attr("dy", ".35em")
    .style("text-anchor", "end")
    .text(function(d) { return d; });

Another problem we solved with Weya is that d3.js draws all the elements represented by the data at once, while with Weya we can draw progressively. This is quite useful when you have a lot of data and you don't want the interface to go unresponsive until everything is drawn. Here's a small example to show the point.

i = 0
data = ...

draw = ->
 return if i is data.length

 d = data[i]
 Weya container, ->
  @div '.user', ->
   ...

 i++
 requestAnimationFrame draw

draw()

The disadvantage of Weya compared to d3.js is that it doesn't bind data to DOM elements like d3.js does. So you can't use enter(), exit(), and update selections when the data changes. But most users rarely need these features. We use Weya with our own data bindings to DOM elements (as in the first example with userElems), and we find it simpler than enter() and exit().
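The manual bookkeeping that replaces enter()/exit() can be sketched roughly like this (the function is hypothetical, not from Weya): diff the old and new data arrays by a key, then create, remove, or update elements accordingly.

```javascript
// Hypothetical sketch of manual data binding: compute which items entered,
// exited, or stayed between two versions of the data, keyed by a field.
function diffByKey(oldItems, newItems, key) {
  var oldKeys = {}, newKeys = {};
  oldItems.forEach(function (d) { oldKeys[d[key]] = true; });
  newItems.forEach(function (d) { newKeys[d[key]] = true; });
  return {
    enter: newItems.filter(function (d) { return !(d[key] in oldKeys); }),
    exit: oldItems.filter(function (d) { return !(d[key] in newKeys); }),
    update: newItems.filter(function (d) { return d[key] in oldKeys; })
  };
}
```

You would then build DOM elements for enter, remove the tracked elements for exit, and patch the tracked elements for update, as in the userElems example above.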


People who are really serious about software should at least write their own libraries

February 19, 2014

We have been using a lot of tools and libraries in our software, and have replaced a number of them with our own code. Libraries make it easy to get things done and to ship early. But in my experience, having a third-party library or tool dominate a core part of your software is not a good idea.

People who are really serious about software should make their own hardware.

- Alan Kay

We have moved away from a number of libraries (and frameworks and platforms) over the last couple of years. This may sound like a lot of hate, but it is not so. We still love those libraries and use them on a lot of smaller projects. But when your product grows and you want to mold it the way you want, sometimes libraries stand in your way. Some of the decisions we made could be wrong because we didn't understand the library properly. But we spent a lot of time trying to stick to those libraries before replacing them.

Writing your own code instead of using libraries takes a lot of weight off the product. Most of these libraries are written by really good programmers to cover a wide range of scenarios, and there is a pretty good chance you won't need all of that. So taking inspiration from them and writing your own stuff makes the software lighter while making things work the way you want. A lot of programmers are likely to replace libraries with their own code at some point, and that is probably why there are so many libraries doing almost exactly the same thing in slightly different ways.

One of the traps you can fall into when writing your own code is reinventing the wheel. There is a chance that you replace a well-written library with a small piece of code initially, but over time end up improving your code until it does exactly what the library you threw away did.

jqPlot

Our first release of Forestpin used jqPlot for most of our charts. Some not-so-ordinary visualizations were made with protovis1. jqPlot helped us quickly develop a product (an MVP) to show potential customers, but it introduced a lot of constraints. We made some changes to jqPlot to customize some of the charts, but it wasn't enough. The next version of Forestpin used d3.js for all the visualizations2, which gave us more control. We never used jqPlot in any project thereafter.

Backbone.js

Backbone was used at Forestpin as well as at nearby.lk. What triggered us to write a replacement for Backbone was that it didn't save states in HTML5 history3. We weren't using most of Backbone's features either, so the replacement, Sweet.js, was much simpler. We plan on making Sweet.js independent of jQuery and Underscore.js, and also renaming it so that it doesn't get confused with Mozilla's Sweet.js.
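Saving view state in HTML5 history amounts to something like the following sketch (the helper names are mine, not Sweet.js's API): serialize the state into the URL so it can be pushed with history.pushState and restored on popstate.

```javascript
// Hypothetical sketch: serialize view state to a query string so it can be
// saved with history.pushState and restored when the user navigates back.
function encodeState(state) {
  return Object.keys(state).map(function (k) {
    return encodeURIComponent(k) + '=' + encodeURIComponent(state[k]);
  }).join('&');
}

function decodeState(query) {
  var state = {};
  query.split('&').forEach(function (pair) {
    if (!pair) return;
    var kv = pair.split('=');
    state[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1]);
  });
  return state;
}

// In the browser:
//   history.pushState(state, '', '?' + encodeState(state));
//   window.onpopstate = function () {
//     render(decodeState(location.search.slice(1)));
//   };
```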

Database

Forestpin Enterprise used a custom in-memory data store built from the ground up at Forestpin in the early days. The data store was a core part of the Forestpin product, and we gained a lot of performance by doing most of the calculations within the database itself.

jQuery Mobile

nearby.lk decided to drop jQuery Mobile after a year with it. Adopting jQuery Mobile was a bad decision. It was very heavy4 and was not designed for apps like nearby.lk. jQuery Mobile is super easy to adopt for a website with server-generated HTML pages, but ours had a lot of dynamically generated pages and we spent quite some time getting jQuery Mobile to work.

Google App Engine

nearby.lk was hosted on Google App Engine for a year and a half before we moved to Amazon5.

d3.js

At Forestpin, we use d3 for all our visualizations and some tables - whenever data is connected to the DOM. We also introduced CoffeeScript helpers to simplify D3.js DOM manipulation code.

We came across some requirements that were hard to tackle with D3.js. One was adding DOM elements progressively. For example, when you are drawing a matrix with a large number of small rectangles, if you draw all of it at once, the user sees nothing for a while and then everything appears at once. It would be more user friendly if elements were added progressively: the user would see sets of rectangles appearing at short intervals, as if it were an intended animation. The total time for all rectangles to appear might be slightly longer, but the user will feel otherwise. We couldn't find a neat way to do this with D3.js.
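Progressive drawing of that kind can be sketched as follows (the function and the chunk size are mine; requestAnimationFrame is passed in only so the sketch runs outside a browser): render a fixed-size chunk of the data on each animation frame so the page stays responsive.

```javascript
// Sketch: draw a large dataset in chunks, one chunk per animation frame,
// so the interface never blocks for the whole render. CHUNK is a guess; tune it.
var CHUNK = 100;

function drawProgressively(data, drawOne, requestFrame) {
  var i = 0;
  function step() {
    var end = Math.min(i + CHUNK, data.length);
    for (; i < end; i++) drawOne(data[i]);   // draw this chunk
    if (i < data.length) requestFrame(step); // schedule the next chunk
  }
  step();
}

// In the browser: drawProgressively(rects, drawRect, requestAnimationFrame);
```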

Another problem was that we couldn't easily move a particular DOM element across parent elements. For instance, to show controls like the action links beneath the focused tweet in the Twitter timeline, you could either have hidden action links on each tweet, redraw the action links when the focus changes, or keep the action links on a separate layer and move that layer. The first two options are not efficient and the third is tricky. The easiest is to remove the action-links DOM element and insert it into the focused tweet when the focus changes. This option, although probably not as efficient as the third, is simpler and faster than the first two6.

There were a few similar issues, so we decided to go with native JavaScript code for DOM manipulation and wrote a small library with an interface similar to our CoffeeScript helpers for d3.js. We will continue to use D3.js for scales, CSV parsing, etc.

jQuery

jQuery is used for selectors, events and Ajax. Our dependency on jQuery is shrinking; using pure JavaScript is not that complicated and it is much faster7.

Although we've been moving away from a lot of libraries and tools, there are still a number of libraries we use. We use them because they make the development process easier, but only as long as they don't constrain us from building what we want to build.
