Building Static Sites with Node.js and Wintersmith

When I’ve spoken on the topic, or even when I posted my recent guide to getting started with Jekyll, the question I always get is whether there is a static site generator comparable to Jekyll that is built with JavaScript and available on npm. The reason people cite is that they aren’t comfortable with Ruby, and thus have trouble when they encounter problems with Jekyll or are unable to customize it fully to their needs.

Well, there’s good news and bad news. First, the bad news: I have found nothing comparable to Jekyll in terms of overall features, documentation and community. Now, I don’t know every engine out there, but, so far, nothing even comes close to fully matching Jekyll.

The good news, however, is that I found Wintersmith to be a viable Jekyll replacement. It has a lot of the key features and is extensible. Plus, there is already a reasonable number of extensions for it. On the other hand, the documentation is awful (let’s be honest) and the community is small, so if you run into a problem, you’re stuck reviewing the source code. On the upside, I found the source code pretty self-explanatory when I needed to rely on it.

Given the lack of a good getting started guide in the Wintersmith documentation, I wrote a two-part series for SitePoint that walks you through the entire process of building a site. It follows the exact same format as my Jekyll guide, covering everything from installation to templating, creating posts, custom metadata and custom data.

Getting Started with Wintersmith: A Node.js-based Static Site Generator - Part 1
Creating Posts, Custom Metadata, and Data in Wintersmith - Part 2

The source code for the example is part of my Static Site Samples GitHub project, which also includes the aforementioned Jekyll sample as well as examples for Harp and Middleman.

Posted on 8 April 2015 | 4:00 am

A Guide to Building Static Sites with Jekyll

As I’ve posted recently, I’ve been speaking a lot about comparing static site engines. There are a ton of options out there (389 as of today, according to this site). However, based on my personal experience, I always recommend Jekyll - granted, that’s based on having used 5 of the 389 so far.

I already have a GitHub project where I have built the same project multiple times with different engines as a means of comparison. The readme will guide you through installing and running each of these, should you choose.

Now, if you want to learn how to use Jekyll using this sample, I have written a detailed guide to getting started with Jekyll for the Telerik Developer Network. It walks through the most common things you need to know about Liquid templating, as well as how to create and build your Jekyll site. I hope you enjoy it!

P.S. One common question I get whenever I recommend Jekyll is from people who don’t know Ruby and would prefer a solution written in JavaScript/Node.js and, preferably, available via npm. The GitHub project includes two, Harp and Wintersmith. I have a follow-up article, which should be published any day now, that walks through the same steps laid out in the Jekyll tutorial, but for Wintersmith. I’ll post about that as soon as it is out.

Posted on 24 March 2015 | 4:00 am

Can Web Audio be Useful?

Next month, I will be presenting at the Fluent Conference in San Francisco on the topic of “Practical Web Audio.” The idea here is that nearly every demo or presentation about web audio (including my own) has been fun and cool but not practical unless you build games or music software. So, are there useful purposes for web audio in a standard web app?

I wrote an article called Adding Audio to Web Apps that begins to explore some ideas. In almost every case, the demo uses very brief portions of audio to try to add context to some form of input. Notifications seemed obvious - but, admittedly, others are harder to make a strong case for except in very specific circumstances. These were just some of my initial ideas - I have a few others, and some variations on the ones I showed already, that I am working on.

Have you used web audio in your web app for something useful? If so, please share (and maybe I can even show it at Fluent).
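To give a sense of what a notification sound built with web audio can look like, here’s a minimal sketch. The `playNotification` helper is my own invention (not from the article above), and the frequency/duration defaults are arbitrary - it just shows the general shape: an oscillator for a short “ping,” with a gain node fading it out so it doesn’t click.

```javascript
// A hypothetical helper that plays a short notification "ping" using the
// Web Audio API. It takes the AudioContext as a parameter so the same
// context can be reused across pings (and stubbed outside the browser).
function playNotification(ctx, frequency = 880, duration = 0.15) {
  const osc = ctx.createOscillator(); // the tone itself
  const gain = ctx.createGain();      // volume envelope
  osc.frequency.value = frequency;
  osc.connect(gain);
  gain.connect(ctx.destination);
  // Fade out quickly so the ping doesn't click when it stops.
  gain.gain.setValueAtTime(0.2, ctx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + duration);
  osc.start(ctx.currentTime);
  osc.stop(ctx.currentTime + duration);
  return { frequency, duration };
}

// In a browser you'd call it on some event, e.g.:
// playNotification(new AudioContext());
```

The key design point for a notification is brevity - a fraction of a second of audio that adds context to an event, rather than a sound effect for its own sake.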

Posted on 14 March 2015 | 4:00 am

Comparing Static Site Engines

On February 18, I had the pleasure of giving a talk to the San Francisco HTML5 User Group. The topic was static site engines, covering the basics of what they are and what they are good for (or not good for). The latter half of the session focused on comparing three popular static site engines: Jekyll, Middleman and Harp. You can view the presentation below (I am also giving an updated version of this presentation at DevNexus in Atlanta later this month).

Sample Application

In order to compare the engines, I created a simple sample site using data from the Adventure Time Wiki. The site is intentionally simple, but uses things like custom post attributes, custom global attributes, data and, of course, posts. You can get the samples, as well as the slide deck, on GitHub. I am hoping to add additional samples in the coming weeks.

Recording

Posted on 4 March 2015 | 3:00 am

Patterns of Development

Patterns are something that you cannot view close up - a narrow view obscures the pattern. However, given distance and time, we can begin to make out the sequences that repeat. This is one of the few benefits of being old, which I am compared to many developers. In this post, I am not talking about development patterns as in software design patterns (or anti-patterns), but rather patterns in attitudes and behavior among developers that change the way a large number of us approach our work.

Front to Back to Front to…

I want to discuss a pattern I have noticed throughout my career, one that has only become more obvious over time: the constant shifting in focus from front-end-heavy applications to back-end-heavy applications. I’m bringing the topic up because I believe we’re seeing a shift again, which becomes clear in recent controversies over AngularJS. Basically, the pattern is that every 5 to 10 years (or so), developers seem to shift in attitude and opinion on where to place much of the burden of the application - moving from placing much of the application and business logic on the front-end to placing it on the back-end. I think a little history might make this pattern clear (though you can feel free to disagree with me).

Back in My Day…

As an old person, let me give a very oversimplified history. So, I’m not that old - really, I am not. But even when I went to college, it was still fairly common for many applications, especially in the enterprise, to simply be a lightweight terminal-type interface to a mainframe that had the power to run the actual application logic. In this case, obviously, the “front-end” application was little more than a text entry interface for entering commands into, and receiving data back from, the mainframe (which was effectively the server in this case).
Rich Client-Server Desktop Applications

As PCs became more powerful, development tools (PowerBuilder was a popular one) helped developers create more complex and interactive native applications for the desktop. While the back-end might retain a good chunk of logic, the interface itself contained a lot of interactivity, and the flow of the application was based upon certain business rules. Basically, many aspects that previously had to be kept on the back-end could now be pushed to the front.

Early Web Applications

The rise of the web changed this. Why? Well, the early web was slow and limiting in many ways. It was powerful in the sense that I no longer needed to deploy applications across multiple desktops, make sure they were updated and so on. It allowed us to interact with our applications from anywhere. However, the capabilities of the browser meant that our applications were closer to the dumb terminals of earlier days than to the rich applications on the desktop. Sure, they may have looked pretty with forms and tables and blinking text, but they were mostly dumb - the application logic resided almost entirely on the server.

Flash, Flex, Silverlight…

This didn’t change because browsers improved - at least not initially. Plugins like Flash built upon the initial attempts (like Java applets) to give browsers the ability to create rich, desktop-like experiences on the web. Soon Flex and Silverlight were hot, and much of the application and even business logic was moving back onto the client. Sure, we had to build portions of our business logic and validation twice, but the experience for the user was much improved. We called these Rich Internet Applications, in part to recognize them as an attempt to recreate the interactivity of desktop applications but served in the browser.

HTML5

This focus on the front-end didn’t change with the death of Flash and the growth of HTML5. Actually, if anything, it increased.
The primary difference was that now we were writing complex, desktop-like applications in JavaScript rather than ActionScript. In fact, many back-ends became so lightweight that, in many cases, applications would connect directly to data in the cloud or in NoSQL databases.

A Shift (imo)

I actually wondered if we might be hitting a point where this pattern would break, but a couple of things changed as I see it. First, there’s the rise of JavaScript on the back-end using things like Node.js (or io.js, if you now prefer). This isn’t because JavaScript is so awesome, but because it allows us to write business logic, validation and such in one language that can run on both ends - meaning, for example, I don’t have to write data validation in JavaScript on the front-end and Java (or PHP or whatever) on the back. Also, these servers are fast and built for running the types of web applications people seem to be building today.

However, that isn’t the big reason. The major shift is because of mobile (of course!). As you may have seen in some of the recent AngularJS debates (among other related topics), placing too much burden for processing and logic on the front-end can make an application run poorly on mobile. And never mind writing things like validation rules twice - we certainly don’t want to have to write our applications once for the desktop and then again for each mobile platform. Thus, we need to move some of the code that we’d perhaps become accustomed to placing on the client back onto our back-end server…and so goes the cycle.

Now, perhaps this trend will cease. Mobile devices are rapidly becoming very powerful computers in their own right. Or perhaps I am old and my mind is causing me to see patterns that don’t exist. But given the back and forth on this that I have witnessed over my career, I suspect we may be in the midst of yet another shift.
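To make the “write it once, run it on both ends” point concrete, here’s a minimal sketch. The function name and the validation rule are my own invention, not from any particular framework - the point is just that a single JavaScript definition can give instant feedback in the browser and enforce the same rule on a Node.js server.

```javascript
// A validation rule written once in plain JavaScript. The same function
// can run in the browser (instant form feedback) and in Node.js (enforcing
// the rule server-side before saving), so the logic is never duplicated
// across two languages.
function isValidUsername(name) {
  // 3-20 characters; letters, numbers and underscores only.
  return typeof name === "string" && /^[A-Za-z0-9_]{3,20}$/.test(name);
}

// In Node.js you'd export it; in the browser you'd include the same file
// via a <script> tag or a bundler.
if (typeof module !== "undefined") {
  module.exports = { isValidUsername };
}
```

Before Node.js, that same rule typically had to be written twice - once in JavaScript for the front-end and once in Java, PHP or whatever ran the back-end - with the two copies inevitably drifting apart.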

Posted on 21 January 2015 | 3:00 am

The Content Model of the Web is Broken

Print is dead. This is one of those supposed truisms we’re all led to believe. It may or may not actually be true, but if print isn’t dead, it’s not healthy. This is especially true when it comes to news and information. Magazines and newspapers are failing all over the place. However, what you may not realize is that this same type of information is dying on the web as well. Sites are disappearing, and the ones that aren’t, in large part, don’t make money off their content. Basically, as of right now, the content model of the web is thoroughly broken, and you are paying the price.

In this post, I’ll speak mostly about sites that focus on content around technology and development, but I think much of this could apply to most any topic area. Keep in mind this is, obviously, all just my personal opinion, and some of the information is based on speculation about certain business models.

The Symptoms

Some popular developer and technology sites have begun to close, such as Dr. Dobb’s and The H, to name a couple of recent ones. While these may not seem like horrible losses, I think they are essentially the canary in the coal mine. In fact, other sites you may care about more, such as MacWorld, InfoQ (run by C4Media) and SD Times (run by BZ Media), have also had to adapt their online businesses to changing times. These are just a few examples, and by no means a comprehensive list.

What changed? There’s simply no money in advertising for content sites. Ad networks, most notably Google, don’t pay what they once did. Direct advertising dollars are now very hard to come by. I used to run a site called Flippin’ Awesome (now called Modern Web), and I can tell you first-hand that the amount of money available for ads was paltry - especially when placed against the amount of space they occupy (and the nuisance they are). Sure, I only got a couple hundred thousand page views a month, but that netted me generally less than $300/mo.
Once you factor in the costs of hosting/running the site, plus my time, this was not a profitable venture by any means.

The Three Types of Surviving Sites

Of course, many sites still survive, but mostly because of trade-offs that you may or may not be aware of.

1. No Longer Really About Written Content

Many sites survive because their content is really just a means of promoting the actual revenue-generating side of the business. For example, sites like Smashing Magazine, A List Apart and InfoQ (to name a few off the top of my head) primarily serve as promotional vehicles for their conferences, which make money (I know no financial details - I’m purely speculating). While this doesn’t mean that their content isn’t still, for the most part, very good, it does mean that it isn’t the driving force of their business. This can mean they cut down on publishing, cut down on paying writers, cut down on other things (editorial, technical editing) or some combination of all three. In the end, however, if your site is just a promotional expense for some other part of the business, cutting either quality or quantity helps keep the costs manageable.

2. Corporate Sponsored

I have some experience in this area, as I used to run part of the Adobe Developer Connection (ADC) and now run the Telerik Developer Network. This type of site is similar to the first in that its primary purpose is actually as a promotional vehicle for something else - in this case, to sell you the products and/or services of the company that runs it. This also doesn’t mean the content is bad. In fact, these types of sites often have the budget to properly compensate authors (or compensate them at all). Paying authors makes it easier to get good authors. Nonetheless, these sites have a clear-cut agenda, which is to benefit the company footing the bill (that makes sense - it’s not a criticism).
The content is only as independent as the company behind it chooses (I’ve been lucky in this sense to have always had freedom). In addition, the site is at the whims of the company - thus, one day the ADC was growing and healthy, the next day it was effectively dead. Running these sorts of sites is not cheap, and their impact on sales is tenuous to track (I think they are important, but it can be hard to find a direct link to actual sales).

3. They Rely on Free Content

This type of site, which doesn’t always even exist to be profitable, would include most blogs, but also sites like my old site (at least when I ran it) and even sites like Huffington Post, which often rely on free or reprinted content, especially outside their main areas. The problem with free content is that it isn’t always the best quality. I spent a lot of time working on articles and with authors to make sure the content on Flippin’ Awesome was worthy of printing. I also had to reject countless submissions, many of which were the article equivalent of spam. Many sites don’t have the time, money or interest to do this. Other sites, like most blogs, rely on the generosity of their authors. However, this means that a) there’s usually no one reviewing what they write and b) it’s rarely a top priority, so these sites often die.

We’ve Thoroughly Devalued Content and Writers

I hate to say it, but the reason behind all this is that we, the consumers, want our content to be free. Not only that, we also think it should be free of obnoxious ads. This means that even sites of types 1 and 2 above pay authors crap. Let’s be honest, no author is making a living writing articles at $200-300 a pop (which is more or less the going rate). Sure, it’s better than nothing, but other motivations have to come into play. Personally, I worry that this model is only going to deteriorate further.
If you want to make money on content, given the money there is in advertising, you either have to produce gobs of content (and raise your page views based on volume) or you use link-bait (and raise your page views based on trickery). Often, it is a combination of both. In either case, the quality is what suffers, and you pay the price. So the next time you do a Google search and the top 10 articles are filled with link-bait junk written by content farms, just consider that this may only get worse.

Posted on 16 January 2015 | 3:00 am

Best Music of 2014

One of my New Year’s resolutions was to write and blog more - and that doesn’t mean just on technical topics, but also on topics I am generally interested in. I just need to get in the habit of writing more, generally speaking. Anyway, any of you who know me know that I am a big fan of music. In fact, until about mid-2013 I was doing a bi-weekly Internet radio show (called Vitamin Sweet) focused only on new music. If you’d listened to that show, you might have heard Charli XCX in February of 2013, Foxes in May 2013 and Lorde in July 2013, among others who later became famous. Along those lines, before 2015 got too far along, I wanted to share my favorite music from 2014.

Magic by Paperwhite

I have a habit of following bands I like on Facebook, and it has often led to fun discoveries. In this case, one of the two members of Paperwhite is also in a band named Savoir Adore, which is how I learned of them. “Magic” is just an EP, but there really isn’t a bad song on the entire album. If you love catchy, ’80s-infused pop, this is a total winner, with the standout song being “Take Me Back.” Outstanding stuff.

Haerts by Haerts

“Wings” by Haerts was probably one of my favorite songs of 2013. It was released as a single, all by itself (as in, no B-sides or even remixes). The band slowly released new music, including their 4-song Hemiplegia EP in 2013, but in 2014 we finally got a full album. Sure, if you had the EP, all those songs are back again, but there’s plenty more here, and every one is a winner. “Wings” is still the best song on the album, though “Call My Name” is my favorite of the songs that had not previously appeared.

Siren by Young Summer

I don’t even recall how I heard of Young Summer. I tend to go searching for new music here and there, and have been known to download albums that I proceed to forget about without even a listen. Young Summer nearly fell into this trap for me. I was browsing my iPod and could not even recall downloading the album - boy, would I have missed out.
Young Summer is electronic alt-pop (if that’s a thing) that is insanely catchy - many mornings I have woken up with one of the songs in my head. My favorite is “Blood Love,” but I will admit that the one that sticks in my head most often is “Taken.” If this woman doesn’t become a star soon, I will be very surprised.

There for U by Astronomyy

If you are a fan of chill-out, “date night” music along the lines of Rhye (or Milosh), then you will love Astronomyy. As of yet, there’s no full album available, but the “There For U” EP features 4 superb songs. He also recently released a new single called “Not Into U.” Hopefully a full album is on the way. Nonetheless, I highly recommend getting everything he’s made available so far. You won’t regret it. My personal favorite is “Pack of Wolves.”

Days of Abandon by The Pains of Being Pure at Heart

I fell in love with The Pains of Being Pure at Heart way back in 2009, when their debut album was my favorite album of that year. Much of the composition of the band apparently changed between 2011’s “Belong” album and this one (basically, only the band’s founder remained the same, from what I recall). You could expect this to mean that the music might suffer, but “Days of Abandon” sounds like what you’d expect from the band - and that’s a great thing, in my opinion. If anything, they got more accessible. You may actually recognize “Simple and Sure” (which is one of the standouts), as it was featured in some commercial, but my favorite is “Beautiful You” (though, unfortunately, this isn’t available in their SoundCloud stream).

PocketKnife by Mr. Little Jeans

I love Mr. Little Jeans so much that my wife (who loves her too) and I bought tickets to go see Lily Allen just because Mr. Little Jeans was the opener (not that there is anything wrong with Lily Allen, but we very rarely go to concerts).
I’d paid too much for upper-floor, standing-room-only tickets where we had to stand with our backs against the wall the entire time to allow people to cross in front. Somehow, though, Mr. Little Jeans wasn’t there (perhaps a mistake on the SongKick listing, though I did end up discovering Samsaya, who is not bad). Needless to say, we were willing to put up with a lot just because Mr. Little Jeans is great, and this album is fantastic. If you’ve heard a song from it, it is “Oh Sailor,” which is a fabulous song, but my personal favorite is “Mercy” (which, unfortunately, is not embeddable from her SoundCloud stream).

Voices by Phantogram

“Voices” is the fifth album, EP or single that I’ve bought from Phantogram since “Eyelid Movies” in 2010. I liked every single one, but rarely loved them. “Voices” changed that by being much more consistently good and much more accessible (in my opinion) than their prior releases. It’s a great album, and “Nothing But Trouble” is an especially great song. If you’ve enjoyed Phantogram before, or even if you haven’t, you should give this album a try.

Posted on 9 January 2015 | 3:00 am

Dealing with an Unhappy Community

Most of us deal with the “community” in our jobs on some level or another. Perhaps we are an engineer on a product that has a community of users, or work for a company that has a community of customers, or, perhaps, we are part of the public face of a company, product or service and tasked with communicating with the community as part of our job duties. Recently, I wrote an article about a bit of a dust-up in the AngularJS community over the plans for Angular 2.0, and it got me thinking about how we deal with the community - specifically, when there is a widespread community backlash.

We’re Not Talking About Trolls

There’s a big difference between trolls and an unhappy community. Trolls’ complaints are almost always singular in nature (and aggressive in tone); in my experience, trolls complain alone and target an issue that is very specific to them. In this case, I’m talking about a decision that caused widespread unhappiness in your customer base or users and is being expressed, sometimes angrily but rarely aggressively, by a large swath of your community.

I Have Some Experience in This

I’ve been on both sides of community backlashes. Professionally, I was the Flash and Flex community manager at Adobe around the time of the infamous “Thoughts on Flash” essay by Steve Jobs. At the time, and understandably, any actions Adobe took around Flash and Flex were heavily scrutinized. I was still a public face for the products (even if I had technically just been reorged) around the time that Adobe decided to end-of-life…errr…open source Flex. As you can imagine, I caught a fair share of flak. Nonetheless, I think it taught me a lot of great lessons that I’ve carried into my positions since, both at and after Adobe.

The Different Kinds of Decisions that Cause Backlash

In my experience, there are three core types of decisions (by a product team, company, open source project, etc.) that can often lead to a backlash. The first two are relatively easy to handle.
Necessary Decisions

Let’s face it - sometimes there are decisions that you, your company, or your product team may make that simply have to be made but will, nonetheless, make your community unhappy. Sometimes you, personally, don’t even like the decision. However, there’s a big difference between liking a decision and understanding and supporting why it was made. To me, this is easy to handle, as your options are very limited. You can and should be understanding of your community’s right to be upset about the decision. You can communicate why the decision was necessary and, if applicable, explain that you aren’t happy either, even if you support its necessity. The last thing you can do in this case is simply develop a thick skin. You’re gonna get some lumps, and that’s ok. Keep in mind that these things pass and often seem much worse in the moment than they do in retrospect.

Poorly Communicated Decisions

Sometimes it isn’t really what changed that upset your community, but how you, your company or others communicated that decision to them. In my experience, companies sometimes spend so much time wordsmithing their communications that they remove all humanity from them - they sound like they are coming from a committee more concerned with protecting the company than with the impact of decisions on their customers. This (and other things) can lead to communicating the impact of a change poorly. In this case, I also think the response is simple, if not easy, which is to clear up the miscommunication (and soften any hard feelings it may have caused). For instance, you could draft a clear and personal message expressing concern for the misunderstanding and clarifying the impact. Make this the opposite of the company-type PR response - make sure they understand that you are a part of that community and care personally about it. Of course, if you can smooth some hurt feelings with free stuff of some sort, that always helps too.
“Best Intention” Decisions

What the heck do I mean by this? Well, this is kind of where AngularJS was. Sometimes decisions that impact our customers, users, or whoever are made with the best intentions, but, in the end, we misread the needs of our customers/users/etc. It wasn’t that we miscommunicated. They understood - they just didn’t like what they heard. You may think these would actually be the easiest decisions to deal with - simply walk back the choice that made them unhappy. However, assuming this is the best response isn’t correct. Sometimes, as it turns out, the decision you made is the right decision for the future of the product, service, company, etc. This can get lost in the fog of loud and unhappy users. If you let the dust settle, it may turn out that it wasn’t as big a deal as it sounded, and the “mob” seemed much larger because it was loud. Sometimes the decision was in the right direction, but it’s now up to you to translate your customers’ anger into adjustments to the choice (this appears to be what AngularJS is doing, by the way). You were headed down the right path; you just went a little too far or veered slightly off course. Now you just need to take a few steps back, or to the right or left, and your community will be happy again. Sometimes the decision was a poor one made with the best of intentions and should, in fact, be walked back entirely. The point is, there are many options to choose from in this case, and knowing which is the right one isn’t always easy when you are receiving a barrage of unhappy tweets, blog posts, comments and more.

Whatever You Do, Don’t Assume Your Community Is Wrong

Here’s the thing to recognize - in none of these cases is the community wrong. Remember, these aren’t trolls - they are community members we care about, with a legitimate complaint. In case 1, they have every right to be unhappy, even if there’s little we can do about it.
In case 2, we communicated poorly, and this led to them being unhappy. In case 3, we probably just need to adjust a little - even if the reaction may have seemed overly harsh. The single biggest mistake you can make in any of these cases is thinking you are somehow better or more enlightened than your community and plowing ahead with your decisions accordingly. I’d love to hear your thoughts.

Posted on 3 November 2014 | 3:00 am

Boston Festival of Indie Games 2014

This past weekend I had the pleasure of attending the latest Boston Festival of Indie Games at MIT. This was the third year of the event and my second attending. This year, as last, I attended with my two boys (ages 8 and 12). Here are some thoughts on the event and some of the games that I, personally, found interesting.

The Venue

This year, as last, the event was held in the athletic center at MIT. However, the event outgrew the downstairs room it was held in last year and actually took up both floors - the tabletop games were downstairs and the digital games upstairs (sessions were elsewhere, but I didn’t attend any). While I am happy that the event has grown (and plenty happy to pay the small $10 fee to attend), it did come with a major drawback: the upstairs room was hot and humid - uncomfortably so. This even on a day when Boston itself was unusually cool for mid-September. Honestly, we might have spent more time there if it weren’t for the unpleasant conditions.

My Favorite Games

Here are my favorites from what I was able to try at the showcase.

Bōru mo

This was probably my favorite game of the day. It’s a fun 2-to-4-player game where the point is to jump on the other players until they run out of lives and, hopefully, you are the last one standing. The way you do this is that your little creature turns into a ball when he jumps and can smash the opponent. It sounds simple but is harder than you’d think. The multi-level platforms and a variety of power-ups add some extra fun to the game, which, as I understand it, will be PC-only for the moment, with other platforms hopefully coming soon. One of the best aspects of this game is the design. It’s colorful and cute, which somehow adds to the sense of fun. This shouldn’t come as a huge surprise, given that the developer is a graphic designer at his day job and this was built over nights and weekends.

Shock Jocks

This is a clever iPad game from @BigMikeTheDev.
It’s basically two-player “air hockey” with a bit of a twist. Your paddle is electric and needs to maintain a charge. Every time you bounce the ball back, depending on a number of factors, it eats into your charge. The strategy is keeping your paddles charged (by placing your fingers along the sides of the game board) without missing the shot. There’s no site just yet, so follow @BigMikeTheDev for updates.

Swimsanity

A Kickstarter project being developed by Decoy Games, Swimsanity is a 4-player underwater brawler with a variety of game modes. Each player comes with different weapons and different abilities, the latter of which charge up as you compete. In one game mode, it was simply a Super Smash Brothers-type brawl, with the winner being the one who gets knocked out the least. In another, it was a cooperative mode where you needed to help each other get through the level without being caught by the creature following your team. The artwork and controls were all nicely done, and my boys and I enjoyed playing together.

World Zombination

(World Zombination Teaser Trailer from Proletariat Inc on Vimeo.)

Yes, the zombie thing is getting old, as made clear by the number of zombie games even at this festival, but this one from Proletariat stood out. There are two modes. In one, you control the heroes trying to defend cities against the hordes of zombies. This mode works much like a tower defense game. In the other (and the mode I got to play), you are the zombie horde trying to destroy the city. In this mode, you use different “mutations” to give members of your zombie horde different powers. Some powers work better than others against certain heroes (for instance, a stealth zombie is good against the snipers). While both modes are takes on existing games, the combination of the two, along with nice artwork and easy-to-understand gameplay, made this one a winner in my mind.

Posted by on 16 September 2014 | 4:00 am

Running Great Technical Conferences

As some readers may know, for five years I ran a small (about 350-person) conference here in Boston. Originally called Flex Camp Boston (and obviously focused on Flex), it was renamed RIA Unleashed and subsequently Web Unleashed. The event still exists, run by FITC and occurring this September in Toronto. I loved running this event, and I generally enjoy running events overall. This is why I was interested in reading How To Plan And Run A Great Conference Experience by Zach Inglis. It's a very good article, and you should definitely read it if you have interest in the topic. However, I had several points I thought needed adding and one I disagreed with. Much of this was mentioned in my comment there, but now I've edited and amended it a bit.

On paying speakers - One complication I came across (at least for US-based conferences) was paying international speakers. There may be legal/tax rules that complicate paying international speakers (especially over certain dollar amounts), and this may vary from country to country, depending on where you are running your event. I don't know the specifics, but it's worth noting.

Promotion - In my experience, getting the word out was the hardest part of running any event. It's common for first-time organizers to think they can rely on high-profile speakers to get out and promote the event, but they are usually mistaken (nothing against the speakers, as I've been one many times). Promoting the event required a lot of research into things like relevant user groups and mailing lists, and the use of targeted discounts (I used codes) to figure out which avenues worked best and double down on them.

WiFi, WiFi, WiFi - It's so hard to get this right (because you are often at the mercy of the venue or vendors) but so important for a technical conference. Complaining about WiFi at tech conferences is like complaining about the weather: everyone does it from time to time.
It never seems to be perfect, but pay close attention to this, as it can really ruin the experience if your WiFi is unusable by a large portion of your audience.

Expect to lose money? - The author states that you should, and it is the one area where I disagree. I nailed down sponsors to cover the guaranteed costs from day one - before the conference was even announced. I learned quickly (and through trial and error) where you can cut costs and minimize risk. For example: find out what the minimum guarantees are to secure your preferred venue and guarantee only that (they can always add food or other expenses, but they will never lower them once you sign); and plan on 10-20% no-shows (regardless of how much you charge - for free events this is much higher) and order food and swag accordingly (you'll learn your percentage range over time, and this can even let you carefully oversell seats, which can really improve profitability). The point is, with careful planning, this doesn't have to be a money loser. I always had the expectation that I would not make money, but I also was not going to lose money (not counting time spent, obviously). Organizers who lose money are less likely to run the event again, which, in and of itself, does a disservice to their attendees.

Be prepared to be scared - My first year I sold out quickly. However, this is not necessarily the norm (and wasn't in subsequent years). In fact, a majority of tickets will be sold during the last 2-3 weeks. A month before, you may be scared out of your mind that the event will be a failure, only to find that you sold out or hit your targets in the end. This became a pattern every year after the first, and it is something I have confirmed with many conference organizers. However, you cannot be reactive, relying on last-minute promotions to salvage things.
Start any big promotional push early enough that it can have an impact in those final two weeks because, most likely, any promotion you start in the last two weeks won't have enough time to gain the traction it needs to succeed. (A side note on this topic: it seems that many people decide to attend early but wait until the last minute to actually purchase - I've done this many times myself - which may explain why last-minute promotions don't have a great impact.) If you've run an event and have tips and opinions to share, I encourage you to comment on the article.
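As a footnote to the no-show and oversell advice above, here is a minimal back-of-the-envelope sketch of the math. The function name and the figures (a 350-seat venue, a 15% no-show rate) are my own illustrative assumptions, not numbers from any particular event:

```python
def plan_event(venue_capacity: int, no_show_rate: float) -> dict:
    """Rough ticket and catering estimates for a given no-show rate.

    no_show_rate is a fraction, e.g. 0.15 for the 10-20% range
    mentioned above. Illustrative only - learn your own rate over time.
    """
    # Careful oversell: if a fraction of ticket holders typically
    # don't show, selling capacity / (1 - rate) tickets still fits
    # the room on average.
    tickets_to_sell = int(venue_capacity / (1 - no_show_rate))
    expected_attendance = round(tickets_to_sell * (1 - no_show_rate))
    return {
        "tickets_to_sell": tickets_to_sell,
        "expected_attendance": expected_attendance,
        # Order food and swag for expected attendance, not tickets sold.
        "catering_for": expected_attendance,
    }

# A 350-seat venue with a 15% no-show rate supports selling ~411
# tickets while catering for ~349 people.
plan = plan_event(venue_capacity=350, no_show_rate=0.15)
```

The same arithmetic works in reverse: if you refuse to oversell, multiply tickets sold by your historical show-up rate to avoid over-ordering food.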

Posted by on 16 August 2014 | 4:00 am