Until recently, it was accepted that HTTPS was a must for sites that collect sensitive data from users, but a mere nice-to-have for other sites. It’s no great stretch to grasp why things like addresses, social security numbers, and credit card information should only be sent to trusted services and be encrypted over the wire. But more casual traffic -- browsing, entering an email address here and there -- seemed harmless enough to blast over plain old HTTP.

The world marches on whether anyone intends it to or not, and it’s similarly no great stretch of the mind to sense that “old” mentality’s shortcomings against present realities. HTTP defines our daily lives in ways I’m sure even the Internet’s pioneers did not foresee. The prevalence of these communications, the commodification of data, and the continued uncertainty of how to treat the Internet at the policy level mean that for actors in that vast and wondrous force there are new responsibilities around the protection of personal data. As the office of the U.S. Chief Information Officer succinctly puts it -- there is no such thing as non-sensitive data.

There are other implications too. Google has confirmed that it weighs HTTPS as a signal in search rankings. Modern browsers make known their own opinions on the matter, flagging insecure pages in the address bar. And Service Workers -- the technological core of progressive web apps (PWAs), the web app paradigm that aspires to rival the experience of native mobile apps -- require HTTPS.

The world’s marching on, indeed. A few months ago, we decided it was time for Condé Nast to keep pace and make HTTPS on our sites a priority.

22 brands, 22 reasons to stay DRY

Currently, each of Condé Nast's brands' sites is an independent web application. We're beginning some interesting experiments that would change that, but the present arrangement places a special demand on our engineering teams to share knowledge, build abstract and centralized solutions, and continually seek improvements to code and process when we do have to iterate. In mid-2016, the engineering team at Wired boldly took the plunge to begin moving www.wired.com over to HTTPS. The migration was carried out in a phased approach, moving each "page type" -- i.e., route -- over to HTTPS one-by-one. This allowed the team to carefully monitor issues and impacts following each move.

After some early concern around search traffic and some tweaks to parts of the site, the effort was an eventual success, and www.wired.com is now served fully over HTTPS. There were many lessons for the broader Condé Nast engineering team to take away. However, while a phased approach was suitable and necessary for the trailblazers, with 21 more sites to migrate -- each having anywhere from a half dozen to a dozen different page types -- tackling each site in phases would have made for a protracted and onerous organizational undertaking. The next brand site migrated to HTTPS, we decided, would be tackled all at once in a full-site cutover. Additionally, that project would have a special focus on operationalizing the cumulative learnings thus far into an “HTTPS playbook” of sorts. Comprehensive documentation and well-transmitted knowledge would be key to avoiding costly repetition of those lessons for subsequent brand migrations. W Magazine’s site, www.wmagazine.com, was the lucky winner.

"Fashion is [a small and manageable amount of] danger"

As a media voice, W Magazine (we’ll call it “W”) proffers refined but relatable critique on fashion, pop culture and celebrity happenings. W’s digital team is excited about the potential of PWAs to creatively engage audiences, and with monthly unique visitors approaching 3 million the sample size was “just right” for taking some risks but clocking meaningful results.

So what are the risks? Well, aside from your standard technical fumbles, there’s a risk of lost search engine ranking. In this case, we’re literally changing every URL on the domain, throwing the "link equity" of each to the mercy of search engines to detect and respond to the change. In order for the new URLs to amass equity again, we have to wait for search engines to crawl the HTTPS version of the site and index the new URLs in place of the old ones.

The amount of time that process takes, and what that means for the website’s viewership, embodies one of the risks of an HTTPS migration. On the optimistic side, recently there've been indications that Google is taking measures to mitigate some of those concerns to help encourage HTTPS adoption. Wired's experience, too, suggests cautious optimism.

Nevertheless, to minimize the risks, we approached W's effort with special meticulousness. There were many nuts and bolts of the stack that would need to be adjusted to prepare for the switch. We conceived of this effort in six phases, our journey through which is detailed below.

Phase I: Static assets

Before delving into anything SEO- or infrastructure-related, there was an obvious low-hanging fruit: static assets. Media, third-party scripts, ads -- if the page is visited on HTTPS, all of these secondary requests a browser initiates need to be requested with HTTPS as well. Otherwise, the browser will treat the page as having mixed content, and likely block the insecure content from being downloaded. Without all of its needed resources your site will not behave as intended. Blocked requests show up plainly in your browser's network inspector.

Coming into this effort for W, we had a list of known resources and third-party integrations that would need to be audited, but figured the best starting point might be to simply see what was happening. We picked a few key page types, visited a sample page for each using HTTPS, and looked -- manually, with our eyes -- at the browser’s network inspector. Sorting by scheme, we could easily see which resources were still being requested over HTTP. If we identified and dealt with the offending requests on the most prominent page types, we reasoned, the task would become simpler when repeated on the remaining page types.

From there “dealing with the offending requests” was a matter of finding them in our code and switching URLs hardcoded with “http” to “https” (here it bears repeating that protocol-relative URLs have fallen out of favor). There's no downside to requesting resources over HTTPS if the page has been loaded over HTTP, so these changes could be deployed to production before the actual cutover (we'll see later that some code changes are in fact time-sensitive).

A lot of the offenders ended up originating from shared internal library code, so having fixed those in this effort future brand sites migrating will simply need to upgrade their versions of those libraries to get the fixes (almost) for free. After implementing the fixes we tested the remaining page types -- and, as hoped, no new offenders cropped up.

You might be wondering a few things. First of all, how does one test HTTPS in a development environment? You have a few options:

  • ngrok -- The 'quick and dirty' way. When you fire up ngrok, it provides both an HTTP and HTTPS tunnel to your local service. Quick enough. The dirty part? If you're developing from within your organization's secure, firewall-protected network, this opens up an access channel from a third-party service into that network. Whether that's an acceptable risk is a conversation to have with your organization's security stakeholders.
  • "Direct" local TLS -- You can create a certificate on your local environment and configure your app’s HTTP server to access it. I found DigitalOcean’s guide helpful; although it is specific to Ubuntu and Apache much of the instruction is generally relevant. This approach avoids the security concerns, but some may find it undesirable to tinker with application code to support HTTPS testing.
  • Stunnel -- This tool can be used to provide a layer of TLS between a nonsecure service and a client. You can configure a local instance of stunnel to connect HTTPS browser traffic with your local, nonsecure application instance without any changes to the latter. Avoiding both security concerns and having to touch application code, this one's the winner in my book.
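For the stunnel route, a minimal configuration along these lines is all it takes -- the certificate paths and ports here are illustrative:

```
; stunnel.conf -- accept HTTPS locally, forward plaintext to the app
; (paths and ports are examples, not a prescribed setup)
cert = /path/to/dev-cert.pem
key = /path/to/dev-key.pem

[https-dev]
accept = 8443
connect = 3000
```

With that running, https://localhost:8443 reaches the app listening on port 3000, untouched.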

Remember, the purpose of this phase is to surface any problematic static asset requests occurring in the browser. While the above testing tools may not offer a precise analogue to your eventual production setup, local HTTPS is useful to that end and for certain other tasks in the migration.

The other thing you might be thinking is that, gosh, it all sounds so manual. To that I would reply -- yes, it was manual. Although we took this decidedly manual approach for this phase for W, there have been some ideas tossed around for automation -- for instance, leveraging PhantomJS or Google's Lighthouse tool -- that we’re hoping to try out in the next brand migration.

Phase II: Application URL logic

Okay, we’ve cleaned up all of our static assets -- now things start to get interesting. What about links that appear on the rendered page? When search engines crawl our site, they will visit those links -- we want to be sure the URLs use HTTPS. We will also need to update our sitemap XML files, and similarly any other files that are accessed by feed services (e.g., RSS) to use the HTTPS version of URLs. And of course, perhaps most importantly, we want to make sure the canonical URL embedded in the metadata of each page is composed to use HTTPS as well.

For W, we conceived of this phase as four smaller steps:

  1. Hardcoded application URLs (e.g., footer links to static content)

  2. Canonical URL generation logic

  3. Sitemap / feeds generation logic

  4. CMS content URLs (i.e., links CMS users may have written directly into content)

The good news -- step 1 turned out to be a rather prosaic exercise in find-and-replace, and steps 2 and 3 were two birds knocked down with the same stone thanks to certain URL-generation logic having previously been centralized.

The bad news -- step 4 turned out to be a bit involved. We’re still working through our process for updating links buried within CMS content -- hundreds of thousands of records potentially dating back many years. Weighing the SEO risk posed by this gap, we decided it was small enough to proceed with our overall migration and complete this as a post-launch task.

But wait, won’t there be problems if we make these changes before the site itself serves over HTTPS? Yes -- we don’t want the canonical URL of every page to reference an HTTPS URL, for instance, if that URL is redirecting to the HTTP version (as is currently the case). That's a recipe for rapidly lost site ranking and, presumably, a near future spent seeking your SEO stakeholders' forgiveness. In order to synchronize these changes with W’s full HTTPS launch we deployed them behind a feature flag. Utilizing a feature flag adds a little overhead, but is nice because it decouples the timing of writing and merging code from its activation in production without having to maintain a potentially long-lived feature branch. Activating the HTTPS feature flag then simply becomes a launch checklist item.
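To make the mechanics concrete, here's an illustrative sketch of feature-flagged canonical URL generation -- the flag and helper names are hypothetical, not our actual application code:

```javascript
// Illustrative sketch of feature-flagged URL generation -- names are
// hypothetical, not our actual application code.
const HOSTNAME = 'www.wmagazine.com';

function canonicalUrl(pagePath, flags = { httpsEnabled: false }) {
  // The feature flag controls only the scheme; everything else is unchanged
  const scheme = flags.httpsEnabled ? 'https' : 'http';
  return `${scheme}://${HOSTNAME}${pagePath}`;
}
```

Before launch the flag stays off and canonical URLs keep their HTTP form; flipping it at cutover switches every generated URL at once, with no branch to rebase or redeploy.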

Phase III: Infrastructure

We’ve got our application code tuned for HTTPS, but we need to sort out how our infrastructure will support it. In our case, www.wmagazine.com is fronted by a CDN (we use Fastly), and we planned to terminate TLS there. Previously, we had set up a subject alternative name certificate -- a single certificate for a group of domains -- within Fastly. This certificate covers all of the Condé Nast brand domains and is used in the TLS "handshake" with clients visiting the sites.

Before the cutover, we redirected HTTPS traffic to HTTP: client requests hitting the CDN with HTTPS would be served a 302 redirecting them to the HTTP version of the URL. Inverting this -- redirecting browsers from HTTP to HTTPS -- would be the crux of our switch. Remember, most people find their way to the site by clicking links in web search results or on social media; if those links before the cutover are HTTP versions, after the cutover, in time, the HTTPS versions will be the ones surfaced and shared, and the CDN redirection will be avoided entirely. We'll also tweak the HTTP status code used for the redirection from 302 to 301 ("Moved Permanently") to indicate to browsers and search engines that the new HTTPS version of the URL is permanent.
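In Fastly's VCL, that inversion can be expressed roughly like this -- a sketch, not our production configuration:

```
# Sketch of an HTTP -> HTTPS redirect in Fastly VCL (illustrative only)
sub vcl_recv {
  # Fastly sets the Fastly-SSL header on requests that arrived over TLS
  if (!req.http.Fastly-SSL) {
    # 801 is a Fastly-reserved status that produces a redirect to the
    # HTTPS equivalent of the requested URL
    error 801 "Force TLS";
  }
}
```

The same hook is where the old HTTPS -> HTTP 302 lived; removing one and adding the other is the whole cutover at the CDN layer.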

On top of that, there was another layer of protection we planned to leverage, which you may have heard of: an HTTP Strict Transport Security (HSTS) header. We didn't set that up in our staging environment, though, so it's explained further below in the post-launch phase.

Given the above, our task for activating HTTPS support on our staging environment consisted of:

  1. Activate the code changes described in Phase II by flipping our feature flag on

  2. Remove the HTTPS -> HTTP 302 redirect from our CDN configuration

  3. Add the HTTP -> HTTPS 301 redirect to our CDN configuration

With that, staging was rolling with HTTPS! For a few brief minutes we basked in the glory of the pleasing “secure” UI symbology appearing in browsers upon visiting the site.

Phase IV: Testing

But we were only just getting started. We had a few tricks up our sleeve to help us catch any issues:

  • Crawled the site using Botify to detect any SEO red flags

  • Ran our main page types through Google's Lighthouse tool to detect some common issues (we're in the process of building our own full-featured site-crawling wrapper around this tool)

  • Set up a Content Security Policy (CSP) header, leveraging the report-uri directive to send record of any lingering hardcoded HTTP asset requests to our internal event reporting service

  • Verified the security quality of the domain’s hosts using Qualys

That's all in addition to manual QA, which was a coordinated effort across a few teams (the W engineering team, site quality, analytics, and revenue technology).
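The CSP we set up was along these lines -- the reporting endpoint here is illustrative, not our actual service:

```
Content-Security-Policy-Report-Only: default-src https: 'unsafe-inline' 'unsafe-eval'; report-uri https://reports.example.com/csp
```

The Report-Only variant means violations are reported but nothing is actually blocked, which makes it a safe dragnet for straggling HTTP asset requests we missed in Phase I.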

Phase V: LAUNCH!!

After about a week of “marinating” on the staging environment we made the switch in production to serve the site via HTTPS. The changes to the production environment were similar to those in staging, but were particularly sensitive to being executed in the right order:

  1. Deploy changes necessary to continue support for wmag.com vanity domain

  2. Activate the feature flag from Phase II

  3. Make the redirection changes to the CDN configuration

  4. Add the CSP header

Whenever something moderately large launches around here, we have a ritual of blocking off time on the calendar, gathering in a room, and sitting around in front of our laptops to monitor all the charts, alerts, and channels as we make the switch. Afterwards there’ll be emphatic announcement emails and a cavalcade of Slack reactions. It’s a way of enjoying the splash, the adrenaline rush of doing something big and important. But as engineers we know it’s mostly for show. We strive to have enough confidence in our tools and processes that every launch is a boring non-event: a few deft flips of switches here and there and the application changes state, quietly and undramatically, and carries on!

Phase VI: Post-launch

Of course, as engineers, we also know there’s always a chance for the unexpected. The CSP reporting will help us isolate problematic assets, and our indispensable monitoring tools (Datadog, Splunk) surface anomalies in traffic and application behavior. Other folks outside of engineering have been watching traffic metrics closely with their own tools as well. So far, there's been no major departure from the status quo!

Aside from monitoring for the unexpected, there were a few final pieces of the migration to be carried out after the switch.

Search-Engine Related Updates

Immediately following the cutover, we made a couple SEO-related updates:

  1. robots.txt -- Changed the XML sitemap references within the /robots.txt file to point to the HTTPS sitemaps.

  2. Google Webmaster Tools -- After adding the website into Google Webmaster Tools, we verified ownership of the HTTPS website using their supplied static HTML file. We then proceeded to submit each XML sitemap and RSS feed into the list of sitemaps.
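The robots.txt change in step 1 amounts to a scheme swap on each sitemap reference -- the paths below are illustrative:

```
# Before
Sitemap: http://www.wmagazine.com/sitemap.xml

# After
Sitemap: https://www.wmagazine.com/sitemap.xml
```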


After 2-3 days of running HTTPS smoothly, we added an HSTS header alongside the CSP header to be included in responses from the CDN. In a nutshell, this header instructs browsers to access a domain using HTTPS only, and to "remember" to do so for a certain timeframe. Ideally, that TTL should be at least six months -- but we will scale the TTL up gradually as our chance of having to roll back to HTTP decreases with time.
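Concretely, the header starts with a conservative max-age and grows over time -- the specific values below are illustrative, not our exact schedule:

```
# Initial, cautious TTL: 30 days (value is in seconds)
Strict-Transport-Security: max-age=2592000

# Scaled up later, once rollback to HTTP is off the table: ~6 months
Strict-Transport-Security: max-age=15552000
```

The caution is warranted: once a browser has seen the header, it will refuse plain HTTP for the whole duration, so a rollback during that window would lock those visitors out.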

Chalk it up as a "W"

With that, we're proud to say that www.wmagazine.com is now fully HTTPS! And in fact, in the time between that launch and the publishing of this post, the Glamour team has also managed to get www.glamour.com onto HTTPS as well! Keen to get all the sites migrated this year, we've kicked off the effort for other brands too.

(Oh, and as you may have spied by now, this technology blog is on HTTPS :)

Having gone through this process thrice now as an organization, we're thrilled to see our collective knowledge become concrete enough to be able to produce not only a half-coherent blog post but a comprehensive playbook for the remainder of our brand sites to make the leap. Ultimately, it's not a lot of work -- it's more a matter of understanding exactly what steps need to be taken and making sure they're taken in the right order, at the right time.

We're excited about what this means for Condé Nast, and equally excited to share the knowledge here, in the spirit of making the Internet continue to be an open and safe place for all!