Webtrends API authentication with Node.js

As a dev apprentice with the NYC Mayor’s Office digital products team (I guess I haven’t updated in a little while), I am currently building an analytics dashboard that draws from Google Analytics and Webtrends, an older analytics solution. Both have APIs that allow you to request relevant metrics. The Google Analytics API took me some time to set up, but my previous work on oAuth with Hackterms really helped me wrap my head around it. Webtrends, however, was a different story. Their documentation is… not great… at all, and merely states that Webtrends uses basic authentication – which, as I’ve learned, isn’t just “simple” authentication, but an actual, rather outdated scheme for authenticating users.

Webtrends comes with a neat Generator tool that allows you to select the data you need, and then gives you a corresponding GET query to use. However, you are meant to plug this query into a browser, which then presents you with a sign-in module:

[Screenshot: the browser’s basic authentication sign-in prompt]

Of course, that’s not usable for API requests, which should be performed automatically from your back-end. So, the account info needs to be passed in via the headers. I started searching for how to construct a basic authentication header for Webtrends, but couldn’t find any up-to-date info, let alone in JS. So, here is a basic Webtrends connection, with Node.js and the npm request package, which simplifies sending requests.

The Webtrends username is constructed with Username and Account, separated by a backslash.

username: username\account

What nobody mentions is that the backslash needs to be escaped. This is really important!

username: username\\account

So, for example, if your username is TakeshiKovacs, your account is Envoy, and your password is Resolution653, your combined credentials would read:

TakeshiKovacs\\Envoy:Resolution653

According to these helpful MDN docs, you construct a basic authorization header like this:

{
  Authorization: Basic YWxhZGRpbjpvcGVuc2VzYW1l
}

The YWxhZGRpbjpvcGVuc2VzYW1l bit is actually your username:password (literally, username + ":" + password), encoded in Base64. In Node.js, we use the Buffer object, which performs all sorts of encoding and decoding operations. To encode a string as Base64, we use Buffer.from(someString).toString('base64');

So, to sum up, the basic authorization header is:

{
  Authorization: "Basic " + Buffer.from(username + ":" + password).toString('base64')
}

Once you have the URL from the Webtrends generator, the complete GET request for data (with the dummy credentials from above) looks like this:


var request = require('request');
var auth = "Basic " + Buffer.from(username + ":" + password).toString('base64');
// e.g. var auth = "Basic " + Buffer.from("TakeshiKovacs\\Envoy:Resolution653").toString('base64');

var webtrendsURL = "https://ws.webtrends.com/v3/.../"; // your Webtrends URL from the Generator

request.get({
    uri: webtrendsURL,
    headers: {
        Authorization: auth
    }
}, function(err, response, body) {
    if (err) {
        console.log("ERROR!");
        console.log(err);
    } else {
        console.log("Got a response from Webtrends!");
        console.log(body);
    }
});

PS: you should store your credentials as environment variables so they don’t get exposed. I used dotenv, which was straightforward and easy to implement.
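For reference, the dotenv setup is only a couple of lines (the variable names below are just my own convention, not anything Webtrends requires):

// .env (add this file to .gitignore so it never gets committed)
// WEBTRENDS_USERNAME=TakeshiKovacs
// WEBTRENDS_ACCOUNT=Envoy
// WEBTRENDS_PASSWORD=Resolution653

require('dotenv').config(); // loads the values from .env into process.env

var username = process.env.WEBTRENDS_USERNAME + "\\" + process.env.WEBTRENDS_ACCOUNT;
var password = process.env.WEBTRENDS_PASSWORD;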


Hackterms, Pt II

I had spent some time trying to get Hackterms off the ground, and then stalled and let the project sit for a month. I had spent days agonizing over small design decisions – lining up icons and such – and then had a single, successful push for my first 20 users.

And then I stalled. Interestingly, this slowdown didn’t come from negative feedback or a lack of interest. I expected harsh user feedback, trolls, needing to cut/revamp features, user churn – in a word, users not caring about the site and not giving it the time of day. I was prepared for it. However, none of that really happened. A few of my users were moderately interested, and then I ran into a few pretty complex bugs – for example, the Submit Definition button wouldn’t work on iOS. Early adopters were still responding on my Slack channel. Though I always responded, I often didn’t follow through. I kept promising to return and fix the bugs, and then kept stalling and kicking the can down the road.

However, unlike some of my past projects, I never considered abandoning Hackterms. Even after about a month of inactivity, I thought about the site frequently, and – between encouragement from friends and belief that this project should be out in the world – was itching for a chance to come back to it. Stepping away from working on the problem daily gave me space to think deeply and reconsider my approach.

When I started, I attacked Hackterms enthusiastically, jumping into the fray (as I tend to) and building, designing, and planning all at once. Although I wireframed, set up a kanban-style Trello board, prioritized features, and otherwise tried to plan ahead, the truth was that my structure was convoluted, and I had made a lot of imperfect technical decisions. For example, I was using a mix of EJS and Handlebars.js to render definitions and comments. When a Slack user asked how he could contribute to the code, I attempted to explain the data flow to him and realized it was anything but intuitive. To be honest, I was a little worried about revamping parts of the site because I couldn’t remember how I had built them.

Sloppiness like this is the tradeoff for building quickly, but I had gotten to a point where my core features were up, and it was time to do two things:

  1. get some serious users (I arbitrarily decided on 100 unique users and 1,000 definitions as my goal). I was at 20 users at that point.
  2. revamp my code to make it readable, accessible, and ready for collaboration. Esther suggested open-sourcing the project (which was also the top-comment Reddit request), and I wanted to be able to get James’s help with my code. I was anxious about this, but I knew it was important to clean the code up.

I was reading the excellent Lean Product Playbook (highly recommend!) and kept thinking about how I could apply what I was learning to Hackterms. The book explained, in a very structured and insightful way, a lot of the topics I had sort-of-intuitively half-assed. No matter how much it sucked to admit, I was making the fatal mistake – coding a massive project (telling myself it was an MVP) before getting real user feedback. My sort-of proof of concept was a massively encouraging reddit thread and a few initial users. I was kind of following a lot of Lean principles, but not in any structured way. The book encouraged me to come up with hypotheses about who’d use the site, brainstorm a set of features, select my top features, categorize them using the Kano model, and then test them.

I had done this, kind-of, sort-of.

  • I had come up with a value prop based on my own experience: users want to quickly identify what a coding term does and where it fits in
  • I came up with a minimal feature set: users need to be able to search for and view definitions, and also add new definitions for a given term
  • I then added what Kano calls a “delighter”: instant search (search/display with every letter you type, just like Google!)
  • After that, I went off the deep end, and decided to build a moderation system that queues up new posts for approval. This was not necessary for an MVP at all. I was anticipating a problem that might come with scaling – trolling and irrelevant content. However, I actually spent forever building this.
  • My proof of concept was effectively marketing: lots of Redditors said “this is a great idea!” I then got a few of them to join after I built the product.
  • I conducted pretty extensive qualitative research with my closest friends. I sat next to them and watched them use the site.

In my last job at Codecademy, one of my managers repeatedly discouraged me from relying too much on intuition in decision-making. Reading the Lean Product Playbook, along with the time I spent away from Hackterms, reinforced just how important it is to plan, form hypotheses, test them, and use metrics. Up till this point, I’d been (quite frankly) a bit confused about the relationship between data and intuition in product. Data seemed like the “right way” to make decisions, but we also glorify brilliant product visionaries (you know exactly where this link leads) and the idea of solving your own problems (which may not be other people’s problems).

The book really set me straight on this, coalescing a lot of the advice I’ve heard (but not understood) from others. Early on in a product, you rely on intuition. I used to picture this as closing your eyes and visualizing/brainstorming brilliant ideas – but, really, you form hypotheses based on your (often unconscious) understanding of the space. That’s why solving your own problems makes sense – you don’t have to guess at others’ pain points, because you know your own. Anyway, you come up with hypotheses, then you build some version of the product (your MVP), and then you test it – relying on data to tell you whether the product is working (in other words, whether you’ve achieved the mythical product-market fit).

I wasn’t totally wrong – I just attributed way too much mystery and magic to “getting it right with intuition”. I learned that no matter how you come up with your product hypothesis, you need to figure out (1) exactly what you’re testing, (2) what core metric represents this – Dan Olsen says that early on, this is usually user retention – and (3) who your target users are. Basically, I needed to create a framework within which I could be creative; a direction for my decision-making.

I had a strong personal problem that I wanted to solve with Hackterms, a lot of intuition about why others might need my product (developed from spending time around people learning to code), and a rough sense that more users and definitions are good. But I never properly formulated my hypothesis, picked a metric to test, and put it in front of users.


Around this time, I stumbled on Courtland Allen’s interview for a YC podcast, which in turn brought me to Indie Hackers, and then to IH’s interview with Pieter Levels and one of Pieter’s own talks. This seriously kicked my ass – especially his advice on validating by launching.

I knew because I spent one year on this Tubelytics app and that didn’t work so I was like, okay I shouldn’t have spent so much time. I should have done more startup stuff just ship, and deploy, and launch, and validate, this is my idea and this was definitely new or my perspective on it was new, validate by launching. You don’t know if the app is going to work but you need to launch it, see if it’s going to work and you need to understand that most apps won’t work. I knew that, so I was like, “Okay I’ll just do a lot and I’ll see what sticks.” like throwing spaghetti at the wall and then I started making one thing a month.

I had spent nearly four months building Hackterms on and off, and I knew I needed to validate or move on. So, I finally did something I’d been meaning to do for months: I invited my friends James and Jon to add some more seed content. We sat around a table, ate pizza, wrote content – and talked about what good content looks like. We ended the evening at ~125 definitions. I decided it was time to share Hackterms with the world, and deal with whatever the consequences were.

If users asked for changes, I’d make them. If the website flopped, I’d move on.

I put the finishing touches on the last muse feature – a list of most searched and most requested terms, and then shared the post on Indie Hackers. I waited eagerly all day, aaaaaaand…. nobody responded. People visited the site, looked up the top searched terms, and then left. No new definitions added, no comments.

The next morning, I got up early, and posted to Indie Hackers again, this time using the “Urban Dictionary for coding terms” tagline, and also posted on r/webdev (which was technically against their subreddit rules due to timing.)

Indie Hackers stayed dead silent, but Reddit gave me an enthusiastic welcome back. Despite the fact that I kept waiting for my post to be removed (because you could only show off projects on Saturdays, and it was a Tuesday), I suddenly had more users on my site than I had ever had before.

Things got a little wild as I watched the user numbers climb up and more user dots appear on the global map. I was getting a ton of upvotes and a ton of really great comments. Amazingly, people were signing up and adding great definitions. The number one suggestion was “add oAuth”, which I quickly learned meant one-click Google login.


I hit the top post on r/webdev, and the site briefly went down at about ~80 simultaneous users, which left me absolutely terrified – because, frankly, I had no idea what to do. Fortunately, after I restarted the server, it seemed to continue working on its own, and I thanked my lucky stars. Someone even posted the link to Hacker News, which I was mildly terrified to do myself – they’re serious technical people over there!


And then… my run for the day was abruptly over. My post was removed, because it wasn’t Show Off Saturday on r/webdev – as I mentioned earlier, it was Tuesday. In a way, though, this was good – it gave me a taste of what having lots of simultaneous users is like, and I got some great feature suggestions.


The #1 request I got was to implement oAuth to let users log in with Google/Github, so I got to work on this. Integrating these was a bit of a pain, so I wrote a whole separate blog post on it – but I managed to get it working! Additionally, I spent more time coding with Esther, and further tweaked my design based on her feedback. Every time I looked back at my older designs, they felt more and more outdated – which meant the site felt more modern with every tweak. So, I wondered how outdated the site looked now, and how I could improve it. I kept paying attention to designs I’d see on the web and integrating little tweaks into my site. James ping-ponged my own advice back at me – don’t try to reinvent the wheel. How do other dictionary sites handle their design? It was good advice, and I followed it.

Around this time, I was about to start two different jobs – one a long-term contract, the other a weekend consulting gig. I knew I wouldn’t have the luxury of working on Hackterms as intensely as I had been, so it was time to get my user numbers up. I had received validation from my previous launch, but it still felt like luck – like my users were doing me a favor. A few weeks later, motivated by an impending deadline, I decided to launch again. I knew I’d have less and less time, so it was now or never. My girlfriend and I were visiting friends in Boston to celebrate a birthday that weekend, and I finished oAuth for Github and Google that Saturday morning. I knew I’d have little time to participate in the ensuing discussions, but since I was slated to spend the next 4-5 weekends working, I didn’t have much of a choice. I decided to launch and trust the site to hold its own.

I once again posted on reddit on both r/learnprogramming and r/webdev, and… once again, the post exploded. Except, this time, it exploded much more on r/learnprogramming than on r/webdev.


I was sitting in my friend’s living room with my laptop open, trying my best not to get distracted from the joyous hangout. After a while, my friends became curious about the climbing numbers, and I briefly told them about the site. We watched my post climb the rankings, and the user numbers creep towards 50, and then 80 – where the site had leveled off before. From my previous crash, I assumed that the reddit hug of death would get me eventually, and I didn’t know why or how to fix it. I was getting more terms and signups, and the post was at maybe 300 upvotes; it was an amazing feeling, and it was great to know my last launch wasn’t a fluke.

As steam picked up, we watched the number climb closer and closer to 100. And then, the site crashed. Without skipping a beat, I restarted the servers. We were back up. Except, this time, I glanced at the log. It looked like the crash was caused by a regex parsing error. The site stayed up, and I went back to playing our board game. Except, ten minutes later, the site crashed again. I checked the logs – same error. This was great news. The Hug of Death wasn’t bringing my site down – a simple code issue was. I excused myself for a few minutes and dug into the code. The issue stemmed from a small search function that converted non-alphanumeric characters into ones digestible by URLs. The simplest thing was to just comment it out – this would impact search results for terms like C++ and A*, but in return, it would prevent my site from crashing. Worth it.
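(The more robust fix, for the record, is to escape regex metacharacters instead of stripping them – roughly like this, with searchTerm standing in for whatever the user typed:)

// escape characters that have special meaning in a regex, so "C++" or "A*" can't break the parser
function escapeRegExp(str) {
    return str.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

var pattern = new RegExp("^" + escapeRegExp(searchTerm), "i");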

I pushed the fix, and held my breath as I went back to playing the board game. No more crashes.

And then, to my disbelief and my friends’ delight, we crossed 100 simultaneous users. It was a completely arbitrary (vanity) benchmark, but it still felt surreal to see. Over 100 users were on Hackterms, and it wasn’t crashing anymore. The dreaded Hug of Death wasn’t getting me, and I had managed to isolate and fix the coding errors that brought me down earlier. Positive comments were pouring in. Lots of users were signing up to create new definitions.

In a flattering move, I even got trolled – which I took as a huge compliment. Users caught wind of the fact that searching things would add them to the “most searched” panel, and … rickrolled me. I got a good laugh out of it; it hurt me to take these down, but I knew I needed to keep the integrity of the content.


Never gonna run around… and desert you!

 

Two more noteworthy things happened that day.

First, because the population of r/learnprogramming is, by design, made up of beginners, I got a lot of visits and user signups, but not nearly as many definitions. My user count jumped from 54 to 250+. This stumped (and encouraged) me, because I couldn’t see a reason for people to sign up other than to add definitions. Due to this huge ratio of learners, I got an influx of searches, but only ~300 new definitions over the next few days (which was amazing, but also a much smaller ratio than my r/webdev launch).

Second, I got a crash course on internet security. Someone spammed the site with XSS, which I knew nothing about, and users were reporting crashes. Users had injected JS through my definitions, causing all sorts of alerts to run – including one that caused an infinite loop: while(true){ alert() }

I felt totally out of my depth, but with a bit of research, quickly sanitized input and removed the definitions. In the grand scheme of things, this was an important lesson, but (thankfully) didn’t majorly impact the launch. Later, I found that users had been complaining about popups/weird terms on my /all page throughout the day, but I didn’t see it till the evening.
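The gist of that kind of sanitization fits in a few lines (this is the general idea, not necessarily the exact code I ended up with, and definition.body is just a stand-in for however the submitted text is stored): escape anything HTML-meaningful in a submission before rendering it.

// turn HTML-significant characters into entities so submissions render as text, not markup
function escapeHTML(str) {
    return String(str)
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
}

// a definition containing <script>while(true){alert()}</script> now displays as harmless text
var safeBody = escapeHTML(definition.body);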

A few users even emailed me to help me fix several security loopholes – including a clever one where a user managed to impersonate me with invisible characters. It was a really nice thing to do.

It was fascinating to watch the change in development priorities. While building the MVP, I tried to focus on validating crucial features, asking do people need this? Now that lots of users were on the site, a very different set of priorities was becoming relevant. Where is your privacy policy? Can I delete my past submissions? Is the input sanitized/secured? Why aren’t you running an HTTPS site? Do you have a docker image I can contribute to?

All in all, I ended the weekend with ~350 definitions and lots of new users. The launch was clearly a success, and contributors – if not learners – were eager to share their knowledge.


Over the next few days, residual definitions kept coming in. I launched on Saturday, and from Sunday through Tuesday users continued to frequent the site, catching up on reddit threads from the weekend. A few days later, I was out to dinner with James, telling him about the launch. I was showing him a bunch of Twitter mentions I hadn’t known about when an article – posted just hours before – caught my eye. Dice, a job search site, had written about Hackterms. Reading through the review, I was floored. It was positive press, and it was spot on about what the site was trying to do:

… the crowdsourced definitions are far easier to grasp than Wikipedia or general web searches. Written by developers and engineers who probably have to explain these terms regularly seems to be a winning strategy.

I was amazed and humbled and floored. I wrote the author to thank him for the article (and not-so-subtly asked if he knew anybody else who might want to write about the site), and I continued personally emailing each new Hackterms contributor. A few days later, we got another writeup from JAXenter, a Java-focused tech publication – again, spot on about the mission.

Obviously, programmers have personal preferences about what tools and frameworks they uses. However, Hackterms makes no judgement calls as to the superiority of one system or another. They also don’t go over code snippets or examples. This is a dictionary, not a how-to explainer. The goal here is to provide definitions as clear and concise as possible.

I was elated, and took a few days to think about what happens next. 1000 terms seemed like a good short-term goal to aim for. It was clear that devs want to teach – but would Hackterms be helpful to learners? I needed to figure this out.


James made a great suggestion, which was echoed by the Dice article: I should tap into individual coding communities – Swift, Haskell, and Rails, for example. My job start date got pushed back a week, so I had another Saturday to launch – and I couldn’t let it go to waste. I looked up the top programming subreddits and realized I had never launched in the biggest one: r/programming. So, a few days later, I posted on a few more subreddits – r/programming, as well as r/iOS, a WordPress subreddit, and a Haskell subreddit. This time, I linked the Dice article, to give the site some legitimacy and (frankly) to show off a little.

The post started picking up steam in the morning. As new definitions came in, I would routinely go into Mlab (where I was storing my mongo database) and combine/merge certain terms in order to keep the definitions consistent (for example, merging backend and back-end, or node.js  and nodeJS). I was making one of these changes, half-absentmindedly (I was in a pretty bad mood that day), when suddenly I saw my realtime user numbers fall from 80 to 25. I saved a term I was working on in Mlab, which refreshed it, and showed me a snapshot of my database. I glanced at it, and then stared at it in horror as I slowly realized…

… my term count was 0.

I opened the site, and there was no content. Thank god, my definitions seemed safe, but given how I had structured my data, definitions couldn’t be pulled up without terms. I had just hit 375 terms (from ~330 at launch an hour earlier), and they were all gone. I refreshed the site a few times as the reality set in.

For about two minutes, I genuinely thought Hackterms was done for. For all the hundreds of hours I had put into coding the features, the true value of my website was its content. Hundreds of people had taken the time to write thoughtful definitions, and now there was no way to access them. Never mind that my launch was an embarrassment – there was no longer any content on the site; I had let down all the users who took the time to make accounts and contribute. All that work, gone.

(I later found out that I most likely caused my own error. Absentmindedly, instead of deleting one duplicate term record, I deleted all of them.)


I consciously had to grab myself by the back of my shirt (figuratively, of course), and pull myself out of the downward spiral I had entered. I wasn’t going down without a fight. I was tempted to look at the reddit thread – no doubt, dozens of messages about how my site is broken were coming in, but instead, I stopped and thought.

I had no backup. But, thank god, the definitions were safe. The core of my content was fine. And, each definition had a term field. So, if I could somehow pull the terms from the definitions and create 375 new records from them, my site would be up.

I knew there was no instant fix, and the launch was botched – but not over. It was only 10AM; the post would be up for a while. As tempted as I was to look at the repercussions of my crash on reddit and feel bad for myself (it was bad – I was having a bad day, and I just wanted to sulk), I knew I needed to focus 100% of my attention on fixing this mess – and there was a glimmer of hope. I brought the site down for maintenance on Heroku.

My first thought was to try to rebuild the terms manually, by brute force, but 375 records would take me forever to create. Plus, each mongo record has a unique ID, and I couldn’t make that up – I’d need to add each record one by one in Terminal. No, that wouldn’t work. A better way would be to cycle through each definition and create a term based on its term field. I could test for duplicates (since some terms had multiple definitions) as I created the records.

The code would be messy, but not difficult. I knew how to do it. I sat down and began writing code, doing my best to quiet any negative thoughts swirling in my head. At the surface, the code wasn’t difficult:

  1. cycle through each definition
  2. grab the term attribute and search the database for it
  3. if the term didn’t exist in the DB (search.length == 0), create the term
  4. move on to the next definition

I went into my local environment, which had ~75 fake definitions, and deleted my terms collection to recreate the error. I wrote the code quickly and with surprisingly few errors. I kept trying the simplest solutions I could think of – to cycle through my definitions, I used a for loop to read each term and, if there was no such term, create it. Mongo threw an error – the for loop was executing almost instantly, so I was creating too many records in a very short time. Simplest solution? setTimeout. After getting a quick fix for the classic for loop/setTimeout quagmire, I ran the code locally. Amazingly, it worked. I knew I could afford to fuck up a bit – if I accidentally created duplicate terms, or only recovered most of them, I’d be okay. I could clean it up and fill in the holes. I just needed to automate the bulk of the work. I named the trigger button “restore database”, ensured it was only accessible to me, and gave it the id of “Jesus” (to bring my data back from the dead.)
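The restore code amounted to something like the sketch below – the collection and field names are approximations of my schema, and db stands in for an already-connected MongoDB driver handle:

// definitions survived (each has a `term` field); the `terms` collection is empty and needs rebuilding
db.collection('definitions').find({}).toArray(function(err, definitions) {
    if (err) { return console.log(err); }

    definitions.forEach(function(definition, i) {
        // stagger the writes so the DB isn't hammered with hundreds of inserts at once
        // (forEach also sidesteps the classic var-in-a-for-loop closure problem)
        setTimeout(function() {
            db.collection('terms').findOne({ name: definition.term }, function(err, existing) {
                if (!existing) {
                    // no term record yet for this definition – recreate it
                    db.collection('terms').insertOne({ name: definition.term }, function(err) {
                        if (err) { console.log(err); }
                    });
                }
            });
        }, i * 100);
    });
});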

I pushed my code to production, and checked Heroku logs, then turned Heroku maintenance off; the site was live once again, with no content. Very quickly, I started getting a ton of traffic. I took a deep breath, watched the site logs like a hawk, and hit my “restore database” button.

And… nothing. I didn’t know why, but it didn’t seem to work. I ran it (erm, pressed the button) again. This time, I got two terms! I didn’t know what was happening, but guessed that my (empty) terms collection was getting too many requests from incoming users to process my function. Google Analytics told me there were 30-40 people on the site, and I needed to shut off their access to the DB so I could work on it.

Again, I went for the simplest solution. I redirected anyone on the site to a simple white page that succinctly stated “down for maintenance”.  No DB access. Then I set my routes so that only the user named “Max” (erm, me) could bypass the page.

I pushed the code, navigated to my button, took a deep breath, and hit the Jesus button. I refreshed the Analytics page frantically to see how many terms I had. Within a minute, the number climbed to 375. My fix worked flawlessly. I tested the site, and definitions showed up just fine. I was back.

I turned off maintenance mode, and users came flooding back from Reddit. I had been down for an hour. Nobody cared; nobody knew. The users who saw Hackterms down probably left and never came back, but there were hundreds of other users behind them. I braced myself and checked the threads. It was only 11AM (NYC time); the SF crowd was just waking up, and the NY crowd was just getting to brunch. I knew the bulk of my users would be visiting later, when the site was running.

Needless to say, I didn’t touch the definitions for the rest of the day.


 

r/programming loved Hackterms, and I was at the top of the subreddit once again. Visitor numbers hovered between 80 and 100, and the site wasn’t crashing!


I was getting lots of positive feedback, and users were adding terms from my “most searched” list, which was a nice win – the feature appealed to users and, more importantly, guided eager new contributors. I continued to invite the most passionate contributors to a Slack channel, which gave me an opportunity to run ideas by (and gather ideas from) a group of dedicated followers.

Over the next few weeks, I obsessively checked Twitter for mentions of Hackterms, responded to every single user who signed up, and kept stirring the pot in the Slack community, getting some great feedback. I also became more active on Indiehackers, offering advice where I could, looking out for similar products, and asking for advice in return. One user even generously drafted a Hackterms redesign (which I continued to tweak, with Esther’s help).


Redesigned header (old header – top, new header – bottom)

I added a number of new features based on user feedback, including this changelog to help my users (and, as it turns out, me!) keep track of the changes. Most significantly, I added cross-definition linking, automatically connecting terms to other definitions on the site – I got a brilliant solution from my ex-Codecademy colleague, Reed (“just let the browser do the work!”). I also added support for markdown and redesigned the search page header.


Automatic cross-term linking


On April 1st, I was browsing the web, looking at all the awesome April Fool’s pranks, and thought to myself – I could do this next year! And then I thought – why wait till next year? I have 7 users total, four of which are my friends – I can afford to take a risk! So, within a few hours, Cluckterms was born.

Somewhere around this mark, to my amazement, the site crossed 1000 terms (~1100 definitions). The last hundred or so came from a particularly enthusiastic user, and I challenged the Slack community (and prodded my close friends) to help get us over the finish line. 1000 is a completely arbitrary number, but it sounded significant enough to potentially mention to reporters. “Yep, our website is at 1000 terms.”

I also applied to YC. This was a scary thought, but a pretty straightforward process, and one that I felt I owed to myself. For years, I idolized YC, loved (and worked at!) YC companies, read YC advice, and dreamt of one day building my own thing. And now, I was able to actually build (with my own knowledge) something – maybe not great – but something that attracted users, something I could apply with. If you’re curious, you can watch my video here.

I didn’t know if I’d get in, and I kept my expectations low. On one hand, I had users and a working prototype. On the other, I didn’t see a billion-dollar market and hadn’t made a dime. I had just built a thing I would have wanted, and it was clear others wanted it too.

My product management apprenticeship at the NYC Mayor’s Office picked up, and I found it harder to motivate myself to work on the site some evenings, though I was challenging myself to write even a little code in the mornings and during lunch. Growth slowed a bit, and I once again found myself in a little bit of a slump. Looking back at this blog post helped – because I was in the same place before I launched on Reddit in ~December/January, sitting around with 25 users.

I now had over a thousand terms, a cool design, and proof of concept. People seemed to dig Hackterms, and even during a total slog, the site grew little by little each day.


The slow days

I needed to expose the site to a wider audience, get some press, and (gasp) make even a dollar in donations/revenue. If there was a distant future where my life would involve making products useful to people, then I needed to prove to myself that someone was willing to pay me for my work. I didn’t know the whole path forward, but I knew there were three big (and scary) launches I could target:

  • Product Hunt
  • Hacker News
  • r/dataisbeautiful

Each of these could take my site down, both technically and under criticism – but I had to move forward somehow, regardless of what YC said (and frankly, I had no expectation of getting in).

jQuery dropdown select on iPhones

This error had been driving me crazy for a few weeks – iPhone users were unable to submit new definitions on Hackterms. The culprit was hard to track down, because (1) I don’t own an iPhone and (2) there’s no easy way to see console logs on one. However, I finally sat down and figured it out.

To display error messages, I added a little flashing error message box right in the New Definition modal.

The culprit was some rogue jQuery (is there any other kind?). I was doing some routine validation to check whether a user had selected a category from a dropdown, and this line would not run on iPhones:

if($("select[name='category'").val(null)){}

This ran just fine on Android and desktops, but not on iPhones. The solution was simple, once I zeroed in on it (read – found the right StackOverflow thread): a vanilla JS replacement worked just fine:

var e = document.getElementById("ddlViewBy");
var strUser = e.options[e.selectedIndex].value;
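Adapted to the category dropdown from the validation above, that comes out to something like:

// same idea, pointed at the actual category dropdown
var categorySelect = document.querySelector("select[name='category']");
var selectedCategory = categorySelect.options[categorySelect.selectedIndex].value;

if (!selectedCategory) {
    // no category picked – flash the error message in the New Definition modal
}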

Now iPhone users can use my site 🙂

Hackterms, Pt I

Lying in bed at night a few months ago, an idea came to me: Urban Dictionary for code.

Countless times, I’ve googled how to do something, only to be bombarded by opinions and definitions for related tools. I’d google Rails and be told that I should use SQLite instead of Postgres, that MVC is on the decline and I should hook up to an MV* front-end framework, and that I should install some npm packages (but… Rails uses gems?), and so on. By the end of it, I’d be left confused and overwhelmed. My well-meaning dev friends weren’t much help – they’d dive into the most complex parts of the topic, debating the performance merits of tools whose purpose was beyond me. I felt dumb. I just wanted to know a few things:

  • high level, what does this do?
  • how does this tool connect with everything else I’m building?
  • is this worth learning right now?

A few days passed, and the idea nudged me again. I decided to share it with two subreddits – r/webdev, and r/learnprogramming. Reddit is fickle, and something interesting happened. r/learnprogramming gave me a score of 0 and a few cautionary comments, warning me this wouldn’t be useful.


r/webdev on the other hand… said “hell yes”. I got a lot of really encouraging responses, and a spirited debate on the merits of such a tool.


I knew I didn’t have a moment to lose. With such an overwhelming response, I had to build this thing. I was excited and more than a bit nervous – after all, r/webdev is a community of developers (though also more than a few learners!) Users also brought up some great concerns about the accuracy of oversimplified definitions, trolling, and similar resources.

Nevertheless, hundreds of real people had a passionate, largely positive response to my idea, and I was pumped. I set up a Trello board and started wireframing.


Not pictured: fear and ambiguity

After QRL, I was fairly confident I could get at least an MVP of the app up – but this app was going to be entirely driven by the community, with visitors contributing to definitions (and ideally moderating.) The top comment was – open source this. I had no idea how.

* * *

First attempt at instant search

I built the initial website with the core features – the ability to search and add new terms and definitions – pretty quickly. I also implemented my best attempt at “instant search”, sending a regex-powered request for similar terms after every single letter the user types. This was a fun mix of front-end design and coming up with a sorting algorithm.
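The server side of the instant search boils down to something like this Express route (sketched after the fact – the route and field names are illustrative, and db is an already-connected MongoDB handle):

// fires on every keystroke: return terms whose names start with whatever the user has typed so far
app.get('/search/:query', function(req, res) {
    // case-insensitive prefix match (note: user input goes straight into a regex here –
    // special characters should really be escaped, but that's omitted for brevity)
    var pattern = new RegExp("^" + req.params.query, "i");

    db.collection('terms')
        .find({ name: pattern })
        .limit(10)
        .toArray(function(err, matches) {
            if (err) { return res.status(500).send(err); }
            res.json(matches); // the front end re-renders the results list with each response
        });
});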

Voting was a little trickier, but I managed to create a clone of a Reddit/Stack Overflow type upvote/downvote system.

Next, I turned to user profiles as well as moderation tools. I wanted to create a minimal barrier to creating new definitions – just enough to deter lazy trolls and very casual users from cluttering up the website. So, I decided that, similar to reddit and Stack Overflow, anyone could vote on definitions, but users would need to log in to add new ones. I wanted to keep accounts as slim as possible, so I created a reddit-style, minimal signup workflow – just a username and a password. No email confirmation or password confirmation – at least not until the website got some users.

First attempt at signing up. Missing: password reset

I also implemented bcrypt, and finally started to thoroughly understand how hashing and salting work. In the past, I’d used the Rails gem (thank you, Michael Hartl!) but it never fully clicked. Now, hello, secure passwords!
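The bcrypt flow with the npm package is short – a sketch, not my exact signup/login code, with plainPassword, submittedPassword, and storedHash standing in for the real values:

var bcrypt = require('bcrypt');

// on signup: hash the password (bcrypt generates and embeds the salt for you)
bcrypt.hash(plainPassword, 10, function(err, hash) {
    // store `hash` on the user record – never the plain password
});

// on login: compare the submitted password against the stored hash
bcrypt.compare(submittedPassword, storedHash, function(err, matches) {
    // `matches` is true only if the password is correct
});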

Working on the front end made me really want to use components, and nudged me to finally learn React (which I’m going to do next). In the meantime, I learned to use Handlebars.js side by side with jQuery. This allowed me to move my definitions into a separate HTML file instead of jQuery-appending an incomprehensible string of HTML.

This also massively helped with the admin board I set up next. I wanted to initially moderate submissions (again, to have a little control over the quality of the content), so I set up a system where I – or a moderator – would need to approve the first 5 submissions for every user, after which a user would have earned the website’s trust and could add endless definitions immediately. I figured, again, that this would deter the lazy trolls from making endless accounts to spam.

I spent a lot of time thinking about the relationship between quality control and the user-generated nature of the site. At this point, I had no idea how I’d moderate, or what standards I’d even set in place. I was okay with the idea of not-100%-correct definitions (and definitely wouldn’t be able to judge the quality of all the different CS topics), but I wanted to make sure that every definition on the site at least made an effort. I didn’t want the content to become heavily subjective, opinionated, or devolve into in-jokes. Moreover, I wanted to prevent posts unrelated to programming. I had described the site as an “Urban Dictionary for code”, drawn to the “in-the-loop” nature of Urban Dictionary posters – not its vulgar, in-joke culture.

At this point, I wasn’t worried about the site getting flooded with trolls bent on posting offensive content – I didn’t think I’d ever be that popular. Rather, I was concerned about low-effort, inside-joke, trashy content (ex: “Rails is the sexiest disaster from 2004”).

* * *

A week in, my project was coming along nicely, and I had finished the core features and admin board. The last MVP feature left to build was comments – and I had some ideas for a cool, post-MVP way to improve these into a helpful, unique feature.

I was really grateful for my Trello board – it kept me on track, and I always knew what to do next. Whenever I got new ideas (which happened very often) or noticed bugs that weren’t worth fixing immediately, I’d just add them to the board and prioritize them, freeing up my mind to focus on the highest priority stuff. I noticed my mind wandering and darting between parts of the app a lot, so it was really helpful to turn to my board. It was also gratifying to see how much I had already done (despite how much work was left).

My Trello board, a week in.

Next up, I had two tasks that intimidated me:

  1. defining my post philosophy and guidelines (“what should ideal content look like”?)
  2. coming up with the website design.

* * *

Around this time, I began to get lost in my own project. In a rush to get things up and running, I had coded quickly. That’s not to say I had no structure – in fact, I created my Trello board and schema precisely so I’d know what to code next. However, I was now far enough into the project that my highest priorities weren’t always blatantly obvious, and I was faced with a lot of “important-but-not-crucial” tasks. I had prioritized my work into “must have/important/nice to have/not crucial” tasks, and I had built all the “must have” ones. That is, I had finished all the things that were absolutely crucial for the website to get off the ground. Users could search for definitions, vote, and add new definitions. I had set up a database and deployed the site. Now, I needed to get through all my “important” tasks – things that were pretty crucial to a good experience, but not essential for the website to run: comments, notifications, mobile design, password reset, moderation, etc.

I found myself struggling to pick what to work on, and often reverting to small design fixes to feel like I was doing something, all while avoiding tackling the next big feature. Beyond a lack of prioritization, certain tasks (like adding notifications when a new term submission was approved or rejected) meant that I needed to dive into the less-than-ideal structure I had set up.

The lesson seemed to be: prioritize harder, and plan ahead further. I kept seeing how important my Trello board/schema/wireframes were during the design process. As a friend described the process, I had consciously taken off my product and project manager hat, and put on my developer hat – and as a developer, I didn’t want to think about prioritization. I just wanted to code and solve technical problems.

That’s not to say I didn’t see this coming – I knew planning was important, which is why I tried to plan ahead to begin with. What I was discovering was just how crucial planning is, and I’ll surely plan even more in my next project.

* * *

I had to constantly remind myself that I was building an MVP, that I needed to get it to users as quickly as I could, and then learn from their feedback, not my impulses. Around this time, James helped focus me by reminding me that my job was to get out barebones versions of the crucial features, not polished versions of some features. I would often find myself digging deep into something obscure, like updating text fields on login without refreshing the page – inconveniences the early adopters would surely forgive me for – while ignoring tricky-but-crucial features, like password reset.

I also tried to keep impostor syndrome at bay. I was building an app with Express and jQuery,  with no front-end framework, no Mongoose, no ES6, with callback hell in my code – in short, with a million opportunities to stop and write better, fancier code with newer technologies.

Finally, I got the app to a place I was happy with. The big test was simple: would users like this? If they did, I could spend more time on it and improve it based on their feedback. If they didn’t, I’d move on to something else – like learning a fancy front-end framework, Mongoose, and ES6.

The next step was exciting, but also new territory – getting beta users and seeding the site with definitions.

* * *

At this point, I showed the website to my girlfriend Jenn and my friend James. They were the first to see the site, and it was immensely gratifying to watch someone go through it as a user would. Both found a number of bugs and inconsistencies, and were confused by a few of the same features, which was very helpful. James also noted that the website UI felt old and looked like a classic dictionary – counter to the product I was trying to build, a dictionary of cutting-edge tech terms.

I now had two Trello cards full of bugs to patch and revisions to make, as well as a design makeover to tackle. The design bit concerned me, because design is not my forte, and I wasn’t sure how to revise it.  I’d love to say that I jumped back in and fixed everything, but in truth, I was frustrated and sad. I wanted my product to be perfect right off the factory floor, and it was far from it – even to the people closest to me (whose job was, of course, to be brutally honest). Discouraged, I took two days off and tried not to think about code.

* * *

Although it stung at first, my session with James was very helpful. He pointed to a few design elements that felt outdated – but more importantly, I started really paying attention to the websites I visited. Normally, I’d pull something up in the inspector if it caught my attention. Now, any time a website struck me as sleek or modern, I would pause and try to zero in on the elements and aesthetic that created this feeling. I’d then open up the Dev Tools and dig into why. I knew that I had made timid color choices – not knowing what to pick, I stuck to a few shades of green, beige, and brown. So, I tried to make the colors bolder. I paid attention to the way others use box-shadow and gradients, thin fonts, icons (at James’s suggestion).


UI, Take 2

At the same time, I wanted the site to be as fluid, unobtrusive, and undemanding of the user as possible. Nobody likes making accounts; nobody likes long surveys; nobody likes long load times. I wanted to get to the point and give the users what they came for – definitions, with no fluff. Inspired by Reddit’s outdated-but-straight-to-the-point design, I tried to cut out as much of this fluff as possible. Attention on the internet is precious, and a cute cat video is just a click away. My job was to eliminate any friction.

* * *

Six weeks later, I got to a point where the website was 99.5% done, and I knew I needed to launch it. I did 60% of the work in the first two weeks, and the last four were a mix of adding secondary features, obsessing over minor design elements, and not working with enough focus.

I knew I had to launch the site, because from here on, any further features would be dictated by my users. I had been avoiding this for weeks. James always suggested I was good at the “marketing” bit of things, but the truth is, I had no idea where to start. I did however, know that…

  • my idea seemed to get good traction, which is why I set out to build it
  • it’s better to have something out in the world with a few users than sitting in the drawer
  • I needed to move on. I’d either be working on Hackterms a while longer, or it’d flop, and I’d move on to something else – but I needed to know

I was sure that more people would show up to the site looking for definitions than to add them, so I needed to seed the site with some starting definitions – and get others to contribute, because I’m not qualified to teach the world to code – before announcing Hackterms to the world at large. Splitting the launch into a soft stage and a hard stage would also allow me to effectively beta-test with anyone willing to try the site out.

I started with the low-hanging fruit. When I made my original reddit post, a whole bunch of users agreed to help. I reached out to them. I thought about their incentives, and what I could do to help motivate them – given my past work in crowdfunding, I thought about the best rewards. My contributors would be sacrificing their time, motivated by a desire to give back to the community. So, I needed to recognize their good motives. A small token, like a handwritten postcard, seemed like the easiest move (although that required getting the users’ addresses…). I also considered adding a badge to the site, but have always found these a little gimmicky. Finally, I knew I wanted to foster a sense of community amongst the contributors, so I started a Slack channel and created a subreddit, just in case.

I went back and re-read the Reddit thread, pulling out the usernames of 38 people who had offered to contribute. Next, I put together a Google doc of the top terms I’d want defined. A commenter suggested I use this roadmap as a starting point.

I was anxious to reach out to people. I thought of a dozen reasons Hackterms might fail, and of objections people might raise to the site’s existence and purpose. I thought about the alternative resources available to developers: Wikipedia, StackOverflow, MDN, countless coding tutorials, FreeCodeCamp, you name it.

Nevertheless, I built Hackterms, and I believed in it, and Redditors thought this tool should exist. I had nothing to fear but the sting of rejection, and if I failed, well, at least I would have tried. I knew I’d be better off saying “I built something and put it out there” than “I built something and never went anywhere with it.”

I settled on a simple pitch – Hackterms was going to be the “why”, “where”, and “when” (importantly, not the “how”) of coding terminology. So, I took a deep breath, and reached out to the first 10 users.

Hey [user],

About a month ago, I shared an idea for a dictionary of coding terms and you offered to help. Well, I’m excited to tell you that I’ve spent the last month building the site, and could now use your help!

In order to launch, we’ll need to have well-written seed definitions for common programming terms to attract initial users and get the ball rolling. Would you still be interested in creating an account and writing a few short definitions for common tools/concepts you use as a developer? Early users such as yourself will help set the tone and shape a direction for the site, and I’ll send you a vintage NYC postcard to thank you for your time (and swag down the road, if things go well!)

Let me know if you’re interested – hoping to work together on this!

~ Max

* * *

I ended the day with just one user, but 6/10 (!) people responded positively. The next morning, I woke up to 3 users. The feeling of seeing people sign up for my thing – trust me with their accounts and their time – and seeing the supportive messages, is incredible. Users were also adding great definitions. I quickly threw together a metrics page to track how many users and definitions I was getting. I also had a few people join a Slack team I created. Messages like this kept me going, despite being intimidated every step of the way by the fact that most contributors were probably way more knowledgeable than me. A scary thing about coding, for me, has always been that there are so many ways to do something, and I’ve always worried that the way I’m building isn’t the best or most correct way. Still, I kept at it.

Four or five days later, I was nearing 15 users and ~50 definitions, and had nearly exhausted my initial list of 38 redditors. I wasn’t sure how many to aim for, but wanted to fill the site with enough definitions for the most commonly searched terms before launching – perhaps 100? 200?

I made a few valuable observations, and it felt amazing to learn from users, not my own intuition.

  1. A bunch of new accounts only made one definition. Although I was approving them as quickly as possible (checking for new submissions incessantly), I thought that not immediately seeing their submissions must have turned these users off. James had suggested approving first and moderating later, so that’s what I did. I worked hard on my submission queue, but that didn’t matter. I turned it off, and auto-approved all definitions, at least while I could manage submissions.
  2. Users who added new definitions used the “related feature” field almost every time, and then followed these to add new definitions. They seemed to naturally follow a train of thought about a particular area of coding. I did this as well – I added “JSON”, and then “AJAX” followed naturally.

A week after that, I crossed 50 terms with almost 60 definitions. I also got a great code review and design pointers from my friend Esther. From here on, I knew I needed to generally refactor the code to avoid callback hell, add the ability to edit definitions, and most importantly – I needed more users.


UI, Take 3

But, I had gotten Hackterms off the ground. It was my first real user project since TheyGotFit, and it was live, and it was being used by real people – by strangers on the internet.

I stepped back and thought about getting more definitions, as well as a real public launch. More on that in Part II, hopefully soon.

Keycode 229

I ran into a pretty obscure mobile JS error, and thought I should share it, on the off-chance someone else finds it as well.

I had built an event handler attached to key codes corresponding to letters and numbers – so, when the user hit a letter, punctuation mark, or number, my event would fire.

if((e.which >= 48 && e.which <= 90) || etc...){

This worked wonderfully until I tested on Android. My event wasn’t firing. It was working in DevTools mobile view, but not on my phone. Worse yet, there’s no easy way to see a console on mobile, so I resorted to appending console.log messages right onto my live site (since I couldn’t run localhost:3000 on mobile). Not pretty.

Anyway, I got to the root of the issue. Every key on mobile was registering as

e.which == 229

This wasn’t happening in DevTools’ mobile view because the key codes were being read correctly there. Turns out this is a well-known issue. My solution was simple – fire the event on e.which == 229. However, this didn’t actually give me the key code.

It seems like there’s no ideal solution. Some people suggest using keypress and textInput listeners, which works intermittently. There is also this workaround, which involves getting the last character typed in a field and determining its char code (a number representing a Unicode character, found through the keypress listener), not its key code (a number representing the key on the keyboard the user pressed, found through the keydown/keyup listener). Here’s a neat tool to differentiate the two.

Here’s the jQuery solution, though I think you can accomplish the same thing by using a keypress listener.

var text = $("#some-textarea").val();
var keyCode = text.charCodeAt(text[text.length - 1]);
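And the keypress-based version, for comparison, is roughly this (bearing in mind that, as noted above, it only works intermittently on mobile):

$("#some-textarea").on("keypress", function(e) {
    // during keypress, e.which holds the char code of the typed character, not the 229 placeholder
    var charCode = e.which;
    console.log(String.fromCharCode(charCode));
});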

 

Little Annoyances

I am dealing with a set of small front-end problems that’s driving me crazy. I’m trying to integrate instant search, and every time I change a small bit of code, it stops behaving as I expect it to.

These individual problems are not hard, but digging through my (messy) front-end structure to fix them is a chore and a motivation killer. Every time I run into another one, I want to throw my hands up and turn to something distracting – which completely kills my productivity.

I need to learn to recognize and ruthlessly prioritize issues like these. In the structure of my app, these issues are not important, and should be dealt with when I work on the front end design – not while I’m trying to build out the back-end structure. Sure, it’s annoying to see the issues pop up every time I try to test the app, but I need to compartmentalize and move on.

When these little annoying issues accumulate, they can be major motivation killers – but only if I let them.

I just need to remember: “There will be time to fix these. I will deal with them – later.”

Handlebars + jQuery

Goal: add HTML components using jQuery without writing a whole bunch of HTML as a string. Instead, we’ll store HTML components in their own files, to be recycled as needed.

Old solution: 

$("#element").append("
<div><span class='myClass' id='" + relevantId + "'>" + relevantID + "</span>Some text about " + jsVariable + " element</div>
”)

Better solution: 

React.js has been on my “to learn” list for far too long, and I swear I’m going to tackle it next – I think (hope) that it’ll solve my components issue.

Meanwhile, however, I’ve had to get by with jQuery, and have been running into a more and more persistent problem. Appending HTML with jQuery is easy for small bits of code, but ridiculous for larger snippets or components. Fortunately, I just found a solution: Handlebars.js, a front-end templating engine. We’re going to store our HTML snippets/components in their own files, and inject variables as needed. So, if we’re coding a Twitter clone, each post would be built from a post.html file, with relevant data inserted from JS using {{variable}}. Our files will look like this (sorry about the screenshot – WordPress formatting sucks)

[Screenshot of the example files]
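In case the screenshot doesn’t come through: the component file is just plain HTML with Handlebars placeholders where the data will go, roughly like this:

<!-- component.html: a Twitter-style post, with {{variables}} filled in by Handlebars -->
<div class="post">
    <span class="author">{{name}}</span>
    <p>{{post}}</p>
</div>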

Handlebars will compile these HTML files with the variables we’ll provide it with. By the way, if this is looking familiar, it’s because it’s how things work in Angular and React, too (as far as I’ve seen), but we’re implementing it without the massive learning curve or intensity of using a whole front-end framework.

So, let’s get down to business. Here’s the step by step:

1. Add a Handlebars CDN link to your HTML file. Grab the latest version here. I recommend the handlebars.min.js file, since it’s a little smaller. Add the file in a script tag, before jQuery and your own JS script.

2. Create a .html component file following the convention above, inserting variables with this format: {{var}}. You can also use a bunch of other pre-built Handlebars conventions outlined in the documentation. I’ll call this file component.html.

3. Now, we’re gonna need to fetch the contents of the HTML file in your JS script. You can do this using jQuery’s $.get() method. The code will look like this:

$.get("component.html", function(data) {
  // do something with the data here...
}, 'html')

4. Next, a bit of Handlebars magic – this should go inside the function(data).

Note: function(data) is a callback function – it only runs once $.get() fetches the contents of the external file. Because JS is asynchronous in nature, it will continue to run the code written after the $.get() section while the app waits for $.get() to return the data from an external resource (in our case, a file). So, in order to operate on the data after we fetch it, we pass in a callback function with the data as a parameter.

 var myTemplate = Handlebars.compile(data);

// this is where we store our variables
var context= {
 "name": "Biz Stone",
 "post": "I am a very important person at Twitter"
 };

// Handlebars magic injects our variables into our template
var compiledPost = myTemplate(context);

// this should look familiar
$("#postSection").append(compiledPost);

5. Voila! We’ve now appended a whole HTML snippet with our own variables. No more messy strings in our JS/jQuery code.

PS: With a little patience and regex string manipulation, it’s possible to get rid of Handlebars altogether and search the $.get() html response for variables encased in {{ }}, replacing them with your own values. However, for the sake of speed, I leave that exercise to the reader.

Building a Starfield with Canvas

I’ve played around with HTML canvas before, and have even made a field of twinkling stars. However, this time, I wanted to recreate an old Windows screensaver where you fly through a star field. In short, something like this.

This proved to be harder than I expected. My first real attempt had stars increasing in size and velocity as they got further from the center.


First attempt

The problem with that is that the stars don’t seem to move towards us, just off the side of the screen – so, I’d essentially created a particle emitter. You’re watching stars be created far in the distance, and the sense of depth comes from the changing star size as they get closer to the edge of the screen.

This wasn’t good enough. I wanted each star to actually occupy a position in 3D space, and then have this position be reflected in the view, creating a more realistic simulation. I wanted to create the sense of a star coming right at you. The issue is that stars have to fly off the screen eventually no matter what – so, as a star gets closer to you, it needs to (1) increase in size and (2) move toward the edge of the screen.

This really tripped me up for a few days. I couldn’t wrap my head around how to represent 3D space in 2D. I was using the Pythagorean theorem in 2D, using each star’s X and Y coordinates. I knew I needed to integrate depth, Z, but I wasn’t sure how. Then, after a few days, it hit me. I watched several working starfields closely to isolate what was happening, and I realized that a star’s distance from the origin point – the center, in my case – impacted two things:

  1. the size of the star (stars closer to us appear larger)
  2. the speed with which the star travels (stars closer to us appear to move faster)

Then came the epiphany. Stars can only move along X & Y axes, but the speed with which they move implies their distance from us. 

Using this, I could now create a sense of depth. Two stars can occupy very close X & Y coordinates – sit next to each other – but if one is bigger and moves towards us faster, we perceive it as being much closer. It took me a bit of time to finagle the relationship between the Z coordinate and the star size/speed, but once I did – voila! The starfield looked much more realistic:


Second attempt (with depth)

So, how does it work? I created a star object with X, Y, and Z coordinates. At each screen refresh, I needed to change each star’s…. (1) x & y coordinates and (2) size, as determined by its Z coordinate.

(1) Each frame, I would get the distance between the star and the center, and move the star by 1/2000th of that distance – times the star’s Z coordinate (which is between 1 and 100+). Thus, stars with higher Z coordinates – meaning stars closer to us – move off the screen faster.

(2) A star’s size would similarly grow as the star’s Z coordinate increased, and the star got closer.

// each frame...

star.size = 0.2 + 0.038*star.z; 
// a star is always at least 0.2 px in radius + some fraction of 3.8, based on distance - 0.2-4px radius total

star.x += (star.x - center.x)/2000 * star.z; 
star.y += (star.y - center.y)/2000 * star.z;

star.z++;

It took me 4-5 days to come up with that relatively simple bit of code – to wrap my head around the relationship between a 2D screen and a 3D space. Once I did, I decided to gamify my new project. More on that soon!
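PS: if you want to recreate this, the full frame loop only takes a few more lines. This is a sketch of the general shape (assuming a canvas element with the id “starfield”), not my exact code:

var canvas = document.getElementById("starfield");
var ctx = canvas.getContext("2d");
var center = { x: canvas.width / 2, y: canvas.height / 2 };

// spawn a star at a random position, at a random depth
function makeStar() {
    return {
        x: Math.random() * canvas.width,
        y: Math.random() * canvas.height,
        z: Math.random() * 100 + 1,
        size: 0.2
    };
}

var stars = [];
for (var i = 0; i < 300; i++) { stars.push(makeStar()); }

function draw() {
    ctx.fillStyle = "black";
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = "white";

    stars.forEach(function(star, i) {
        // the per-frame update from above
        star.size = 0.2 + 0.038 * star.z;
        star.x += (star.x - center.x) / 2000 * star.z;
        star.y += (star.y - center.y) / 2000 * star.z;
        star.z++;

        // once a star flies off the screen, recycle it
        if (star.x < 0 || star.x > canvas.width || star.y < 0 || star.y > canvas.height) {
            stars[i] = makeStar();
        }

        ctx.beginPath();
        ctx.arc(star.x, star.y, star.size, 0, Math.PI * 2);
        ctx.fill();
    });

    requestAnimationFrame(draw);
}

draw();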