Blog

  • The Nanny Scale

    Let’s say you’re writing a Slackbot that plugs into an LLM. When you build the prompt, instead of sending only the user’s message, you could send all of the Slack data associated with that message, including the message itself. You could then give the LLM tool-calling access to the Slack API so it can perform a range of lookups using data from the Slack request, for example the Slack user ID of the message sender. To continue this example, the LLM might use the API to look up the username and full name of the message sender so it can have a more natural conversation with that person.
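
    As a rough illustration, here’s a minimal sketch of that approach in Python. The Slack call (users_info from the official slack_sdk package) is real; the llm_chat function and the tool-calling wire format are placeholders standing in for whichever LLM provider you happen to use.

    import json
    from slack_sdk import WebClient  # official Slack SDK

    slack = WebClient(token="xoxb-...")  # bot token (placeholder)

    # One tool the model may call: look up a Slack user by ID.
    TOOLS = [{
        "name": "lookup_slack_user",
        "description": "Fetch the username and real name for a Slack user ID.",
        "parameters": {"user_id": "string"},
    }]

    def lookup_slack_user(user_id: str) -> dict:
        """Resolve a Slack user ID to a username and full name."""
        profile = slack.users_info(user=user_id)["user"]
        return {"username": profile["name"], "real_name": profile["real_name"]}

    def handle_message(event: dict) -> str:
        # Hand the model the *entire* Slack event, not just event["text"],
        # along with the tool list, and let it decide whether to do lookups.
        messages = [{"role": "user", "content": json.dumps(event)}]
        while True:
            reply = llm_chat(messages, tools=TOOLS)  # hypothetical LLM client
            if reply.get("tool_call"):
                call = reply["tool_call"]
                result = lookup_slack_user(**call["arguments"])
                messages.append({"role": "assistant", "tool_call": call})
                messages.append({"role": "tool", "content": json.dumps(result)})
            else:
                return reply["content"]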

    For the moment, forget about the specifics of implementing a Slack user interface. The question here is, do we give the model all the data and let it call tools to do things with that data when it determines that would be helpful in building a response? Or do we nanny the model a bit more (provide more care and supervision) and only give it a prompt that we’ve crafted, without ALL the available metadata?

    I’m going to call this The Nanny Scale and suggest that as models continue to get smarter, we’ll move further towards increasing model responsibility. Where you sit on the scale also depends on how smart the model you’re using is. If it’s an o1 Pro level model with CoT and tool-calling capability, maybe you want to give it all the metadata and as many tools as you can related to that metadata, and just let it iterate with the tools and the data until it decides it’s done and has a response for you.

    If you’re using a small model and then further quantizing it to fit into available memory, thereby risking reducing its IQ even further, you probably want to nanny the model: increase the care and supervision, reduce the responsibility the model has, pre-parse the data (removing anything that could cause confusion, even if it might be useful), and reduce the available tools, if you provide any at all. A sketch of that nannied version follows below.
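
    At the nannied end of the scale, the same bot might look something like this: pre-parse the Slack event yourself, resolve the sender’s name up front, hand the model one tightly crafted prompt, and offer it no tools at all. This sketch reuses the slack client and the hypothetical llm_chat stand-in from the earlier example.

    def handle_message_nannied(event: dict) -> str:
        # Do the lookup ourselves rather than trusting the model with the API.
        profile = slack.users_info(user=event["user"])["user"]

        # Hand-craft a small prompt: just the sender's name and their message,
        # with all other Slack metadata stripped out.
        prompt = (
            f"You are a helpful Slack assistant. "
            f"{profile['real_name']} says: {event['text']}\n"
            f"Reply conversationally, addressing them by first name."
        )
        reply = llm_chat([{"role": "user", "content": prompt}])  # no tools offered
        return reply["content"]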

    It’s clear that a kind of Moore’s Law is emerging with regard to model IQ and tool-calling capability. Eventually we’re going to have very smart models that are very cheap and that can handle having an entire API thrown at them in terms of available tools. But we’re not there yet. Models are expensive, so we like to use cheaper, less capable models when we can, and even the top performers aren’t quite ready for 100% responsibility.

    So as we’re building applications we’re going to have to keep this in mind. We’ll launch v1, models will evolve over several months, and for v2 we’re probably going to have to slide the nanny scale down a notch or two, or risk shielding our customers from the useful cognitive capabilities that are revealed when a model takes on more responsibility.

  • Amidst the Noise and Haste, Google Has Successfully Pulled a SpaceX

    In 2013 Google started work on TPUs and deployed them internally in 2015. Sundar first publicly announced their existence in 2016 at I/O, letting the world know that they’d developed custom ASICs for TensorFlow. They made TPUs accessible to outside devs via Google Cloud in 2017 and also released the second generation that same year. And since we’re plotting a timeline here, the Attention is All You Need paper that launched the LLM revolution was published in June of that same year.

    OpenAI got a lot of attention with GPT-4, a product based on the AIAYN paper, putting LLMs on the map globally, and Google has taken heat for not being the first mover. OpenAI last raised $6.6B at a $157B valuation late last year, which incidentally is the largest VC round ever, and they did this on the strength of GPT-4 and a straight-line extrapolation that GPT-5 will be ASI and/or AGI, or close enough that the hair splitters won’t matter.

    But while OpenAI lines up, Oliver Twist style, to ask NVidia “please sir, may I have some more” GPUs for its data centers, Google has vertically integrated the entire stack: from the chips with their TPUs, to the interconnect, to the library (TensorFlow), to the applications that they’re so good at serving to a global audience at massive scale with super low latency, using the water-cooled data centers that they pioneered back in 2018 and which NVidia is just getting started with.

    Google has been playing a long game since 2013 and earlier, and doesn’t have to create short term attention to raise a mere $6 billion because they have $24 billion in cash on their balance sheet, and that cash pile is growing.

    What Google has done by vertically integrating the hardware is strategically similar to SpaceX’s Starlink, with its vertically integrated launch capability. It’s impossible for any other space-based ISP to compete with Starlink, because SpaceX will always be able to deploy its infrastructure more cheaply. Want to launch a satellite-based ISP? SpaceX launched the majority of the global space payload last year, so guess who you’re going to be paying? Your competition.

    NVidia’s margin on the H100 is reportedly around 1000%, meaning they’re selling it for roughly 10X what it costs to produce. Google has been producing its own TPUs at scale for about ten years. Google’s TPUs deliver slightly better performance than NVidia’s H100 and are probably on par when it comes to dollars per unit of compute. Which means Google is paying something like 10X less for that compute than their competitors.

    And this doesn’t take into account the engineering advantages of having the entire stack, from application to chips to interconnect, in-house, and the ability to tailor the hardware to their exact application and operational needs. When NVidia is compared to AMD, the former is often described as having a much closer relationship with developers and releasing fixes to CUDA on very short timelines for its large customers. At Google, the developer and the chipmaker are the same company.

    As a final note, I don’t think it’s unreasonable to consider the kind of pure research that drives AI innovation as part of the supply chain. And so one might argue that Google has vertically integrated that too.

    So amidst the noise and haste of startups and their launches, remember what progress there may be in silence.

  • My 2025 AI Predictions

    The $60 million deal that Google cut with Reddit will emerge as incredibly cheap as foundation model providers, amidst the data crunch, realize that Reddit is one of the few sources of constantly renewed expert knowledge, with motivated experts in a wide range of fields contributing new knowledge daily for nothing more than social recognition. The deal is non-exclusive, as was demonstrated by a subsequent deal with OpenAI, meaning Reddit will begin to print money.

    Google’s vertical integration of hardware via their TPUs, their software applications, and their scientists inventing the algorithms that underpin the AI revolution is going to begin to pay off. Google will launch a number of compelling AI applications and APIs in 2025 that will take them from an academic institution creating algorithms for others to a powerhouse in the commercial AI sector. Their cost advantage will enable them to deliver those applications to their customers at a far lower price, and in many cases completely free. Shops like OpenAI lining up for NVidia GPUs will be the equivalent of a satellite ISP trying to compete with Starlink, which has vertically integrated launch capability.

    DeepSeek will continue to demonstrate unbelievable cost reductions after delivering V3 for less than $6 million as the group of former hedge fund guys continues to sit in a room and simply outthink OpenAI, which has been hemorrhaging talent and making funding demands approaching absurdity.

    OpenAI will be labeled the Netscape of the AI revolution and be absorbed into Microsoft at the end of the year. But like Netscape, many of their ideas will endure and will shape future standards.

    As companies like Google and High-Flyer/DeepSeek prove how cheap it is to train and operationalize models, there will be a funding reset, and companies like Anthropic, who raised a $4 billion Series F round from Amazon in November, will need to radically reduce costs, and we may see down rounds.

    We will see new companies emerge that provide tools to implement o1 style chain of thought in a provider and model agnostic way. Why pay o1 token prices for every step in CoT when some of the steps can be done by cheaper (or free) models from other providers?
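
    As a toy sketch of what such a tool might do, consider routing the intermediate reasoning steps to a cheap model and reserving the expensive model for the final answer. Everything here is illustrative: call_model and the two model configs are hypothetical stand-ins, not any particular provider’s API.

    # Illustrative only: route chain-of-thought steps to the cheapest capable model.
    CHEAP = {"provider": "local", "model": "small-open-model"}      # hypothetical
    EXPENSIVE = {"provider": "premium", "model": "frontier-model"}  # hypothetical

    def solve_with_cot(question: str, n_steps: int = 4) -> str:
        thoughts = []
        for i in range(n_steps):
            # Intermediate reasoning steps go to the cheap model.
            step = call_model(CHEAP, prompt=(
                f"Question: {question}\n"
                f"Previous thoughts: {thoughts}\n"
                f"Write reasoning step {i + 1}."
            ))
            thoughts.append(step)
        # Only the final synthesis pays frontier-model token prices.
        return call_model(EXPENSIVE, prompt=(
            f"Question: {question}\n"
            f"Reasoning so far: {thoughts}\n"
            f"Give the final answer."
        ))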

    China will continue to rival the USA in AI research and in shipped models. The new administration will rethink the current limits on GPU exports, which will prove ineffective at accomplishing their goal of slowing the competition.

    And finally, my personal hope is that the conversation around the dangers of AI will shift from fantastical Skynet scenarios to the practical reality that out of the $100 trillion global GDP, roughly $50 trillion is wages, and that this is both the size of the AI opportunity and the scale of the global disruption that AI will create as it goes after human labor and human wages.

    We need to acknowledge this reality and hold to account disingenuous companies and founders who are distracting from this through AGI and ASI scare mongering. This “look at the birdie while we steal your jobs” game needs to end. The only solution I’ve managed to think of is putting open source tools and open source models in the hands of the workers of the world to give them the opportunity to participate in what could, long term, become a utopian society.

  • Colorado Mountain Wave in a Cessna 206 Turbo

    I thought I’d document my experience in Colorado Mountain Wave in my 206 on a recent flight from Idaho to Centennial, CO. The details are as follows:

    • Plane is a 206 Turbo with G1000 NXi and I imported the data from the data card into CloudAhoy which is how I made the video.
    • Wind that day was 270 at 35 KTS at 15,500 MSL which was my crossing altitude.
    • Crossing point was the East Portal which is a common crossing point for GA VFR aircraft. It’s kind of like a low wall with taller peaks north and south.
    • I think there was a high-altitude temperature inversion that day, but unfortunately I didn’t confirm it and I can’t find a historical source for winds aloft to do so. Let me know if you know of one.
    • As I approached the Rockies from the west, lenticular clouds were visible to the north and south, with a cloud above me that may also have been lenticular.

    I normally cross the East Portal at around 14,000 but after seeing the conditions for mountain wave I climbed up to 15,500 to give myself a bit more room. As I crossed the ridge I had the autopilot enabled and I encountered a descending column of air – perhaps at 1,000 fpm. The autopilot did its job and worked to maintain 15,500, which meant the plane entered a climb within that descending column of air, and I watched my airspeed slow from about 123 indicated to 97 IAS.

    I’d like to dwell on that for a moment. A friend of mine, an accomplished instrument-rated pilot, was in a higher performance aircraft crossing the Sierras from west to east at night in IMC and encountered the same effect. A westerly wind was crossing the Sierras, which are also famous for mountain wave and for creating a downward flow on the leeward side. The autopilot tried to maintain altitude and the plane slowed down enough that, when he finally noticed, it really got his attention.

    I was told this story in person and reading between the lines this left a strong impression on him. I think part of the reason is that in the higher performance plane his stall speed was fairly high, so the potential to have the autopilot drop the airspeed enough to stall the plane was higher. I don’t know if his plane had Garmin ESP. Mine does, and it will nudge the nose down if the plane appears to be entering a power on stall. I plan to verify this.

    Once the airspeed crossed 100 kts and continued to drop, I turned the AP off and pointed the nose down. At this point the airflow across the ridge and in the leeward side at 15,500 was laminar – very smooth. You can see this in the video below if you look at the G meter in the top left.

    As I descended and moved further away from the ridge, I think what happened is I moved below that laminar flow. As I got out into the foothills and at around 13,700 MSL which was 5,200 AGL I was hit with moderate turbulence. In FAA terms, moderate is passengers screaming and crying and severe is the airframe at risk of taking damage. I’d say I was on the high side of moderate.

    I experienced plus 1 G (or plus 2 if you want to nitpick and point out that 1 G is at rest) and negative 0.5 to 0.7, although I haven’t verified this. You can see the G meter in the video. The aircraft can handle +3.8 G and -1.52 G flaps up.

    I got one thing right and made two mistakes. Crossing at a much higher altitude than normal, and maintaining it, gave me plenty of room AGL to recover from a stall/spin scenario. The first mistake was to descend at an airspeed that was far too high given the potential for turbulence. I hit the turbulence at around 150 kts indicated, and within a second, due to wind shear, the ASI jumped to 163.

    I pulled power at that point and gently nosed up to avoid increased G loading on the airframe, but was doing everything I could to rapidly reduce my speed. The turbulence was short and sharp. The G loading wasn’t particularly high in terms of what the airframe can stand at my weight and balance, and given that I’d burned most of my fuel, but the short sharp nature of it was of real concern.

    I then made the second mistake, which was to slow down too much. I think I got as slow as 83 indicated, which is way above the published clean stall speed of 62 KCAS, but given the wind shear I should have kept it at maneuvering speed, which ranges from 106 KIAS to 125 KIAS, with the lower speed being for lighter aircraft loading. I was probably around 110 KIAS given my fuel and load. 105 KIAS would have been a safe speed for the entire descent.

    I was talking to Denver Departure, so I called them up and gave them a moderate-to-severe turbulence PIREP, which I revised a couple of minutes later to moderate.

    Right before I encountered the turbulence I saw my VSI indicate I was entering an ascending column of air. It’s hard to tell how fast it was ascending because I nosed up to reduce airspeed. My VSI peaked at 3,100 fpm. But I then encountered a descending column of air which was also 3,100 fpm with pitch level. That’s around 31 knots of descending air (a knot is roughly 101 feet per minute, so 3,100 fpm ÷ 101 ≈ 31 kts), to put it in perspective. My tailwind at that point was 40 kts based on the difference between TAS and ground speed via CloudAhoy. So 40 knots horizontal and 30 kts downward. Fun stuff.

    This effect is due to a laminar mountain wave flowing over the Rockies and extending in a repeating sine wave over the plains. But underneath that laminar flow you get a strong rotor effect which can do severe damage to aircraft.

    Around the same time a TBM reported severe turbulence within KAPA’s (Centennial’s) Delta, which is quite far east of the foothills. As I approached KAPA the turbulence became very intermittent, but I can see how a rotor might have reached down into KAPA’s Delta and knocked that TBM around, given what I had experienced from 13,700 down to 11,000.

    I’ve done a fair amount of reading about mountain wave, but that doesn’t prepare you for real world experience – and unfortunately in my case the knowledge didn’t bed down enough for me to slow down to maneuvering speed on my descent, which of course I’m really beating myself up over.

    My takeaways are as follows:

    • Avoid the leeward side of the Rockies in mountain wave conditions.
    • If you absolutely have to be in the lee of the Rockies in these conditions (I can’t imagine why – maybe research, maybe a mistake), give yourself plenty of altitude, slow down to whatever your POH says is turbulent air penetration speed, and lock everything down. You’re going to get beaten up. I was also rolled a few times fairly aggressively – to about 45 degrees before I caught it each time. Very quick rolls.
    • Absolutely do not head westward towards the lee side of the slope with ascending terrain, even if you have a turbo. You’re liable not to notice your decreasing AGL altitude, and if you encounter a severe downdraft you could easily get smacked into the mountain. We recently lost a CAP piston single in the lee of one of the slopes. It looks like they entered the lee at only 500 AGL with 35 kts over the ridge.
    • This time of year, in fall/winter/spring, we frequently see mountain wave, so I’ll either avoid crossing the high Rockies or try to time it for when winds aloft are less than 20 kts, which sometimes happens in the mornings.

    Here’s a video of the telemetry imported into CloudAhoy which provides a simulation of the flight. I’ve added a narration. Clearly this is designed for interested aviators and not the YouTube crowd. 🙂


  • Briefing Something Mission Critical When Failure Is Not An Option

    I started my career in IT operations, moving to London and working first on Coca-Cola’s infrastructure, and then moving into investment banking and working on trading floor infrastructure. This was after being based in South Africa and doing quite a lot of work for De Beers, including on the most productive diamond mine in the world at the time. I absolutely loved working on mission critical systems. Loved the rush. Loved being one of a small team at 4am doing a complex deployment for a sometimes multi-billion dollar business, where failure was not an option.

    So failure not being an option is something that has fascinated me. I’m an instrument rated pilot. I fly a Cessna 206 in what in aviation is referred to as single pilot IFR – meaning that I’m flying on an instrument flight plan in bad weather i.e. in the clouds, and there’s only one of me in the cockpit. It’s some of the most demanding flying one can do in terms of cognitive workload. And of course failure is absolutely not an option.

    In instrument flying we use approach plates to conduct an instrument approach. The approach phase is one of the busiest times in the cockpit along with doing an instrument departure procedure. The approach plate is a one page easy to read summary of several critical pieces of information, like what my frequencies are, what my critical altitudes are, what path I’m flying and what to do if, when I get to the runway, the clouds are so low that I can’t see it and I have to execute a missed approach.

    There’s a technique we use called “briefing the approach,” which is really designed for a two-pilot environment, but we single-pilot IFR guys use the same technique. You’ll do a read-through of the instrument approach plate before you actually start flying the approach, and you’ll do it at a time when you’re not as busy as you’re about to be. Starting at the top you’ll read through items like frequencies, navigational fixes, minimum altitudes, the missed procedure and so on.

    I can’t share the approach plates I use because they’re published by Jeppesen and are copyrighted, but here is the FAA plate for the approach on Orcas Island where I live, that I fly quite a lot.

    The briefing from the top of the plate will go something like this:

    • OK so the approach we’re doing is the RNAV runway 16 at Orcas Island
    • Our approach course is 193 degrees
    • Airport elevation and touchdown zone elevation are 35 feet
    • AWOS frequency [for the weather] is 135.425 and we’ve already given it a listen and have the weather
    • Whidbey approach frequency is 118.2 and we’re already talking to them and we’ll be switching to Victoria on 132.7 soon and I have that dialed in
    • Once we’re on the approach they’ll switch us to the advisory frequency which is 128.25 and I have that ready.
    • Our final approach fix is CALBI which we’ll cross at 1900 or above.
    • We’re doing an LP approach and our minimums are 340ft and I’ve got that dialed in.
    • Our missed procedure is…..

    [Side note: If you’re instrument rated, don’t nit pick my briefing here. It’s designed to be parsable by non-pilots and is for illustrative purposes only.]

    And so it goes. It’s a relatively quick process and it’s more concise than I’ve described here because there’s some jargon used.

    As a single pilot flying IFR (instrument flight rules) you literally say the briefing out loud to yourself before flying the approach. This technique is one of the many that aviation has come up with to mitigate the weakest link in aviation, which is the human being at the controls. In general aviation (non-scheduled flights like the ones I fly) around 78% of accidents are caused by the human. And so aviation spends a lot of time coming up with techniques like this to mitigate the risk of the human making a mistake.

    Surgery has a similar technique called the timeout. The timeout is essentially the surgical team “briefing” the procedure. This includes basic items like the patient identity and the surgical site. It’s to prevent fundamental errors like wrong site, wrong person or wrong procedure errors.

    I’ve incorporated this concept into my business (Defiant Inc, which makes Wordfence) and my personal life. If I’m about to hitch a 10,000 pound trailer to my truck, take it onto a ferry and go do a bunch of stuff on the mainland, I’ll brief it with my wife to make sure we haven’t missed anything. Yeah – I know – that makes me sound like a bureaucratic pain in the ass, but we’ll just spend a couple of minutes talking through what we’re about to do and whether we have everything we need.

    If we’re about to do something complex or mission critical at work I’ll brief it with the team. They don’t realize what I’m doing most of the time, but I’ll describe it as “briefing” the thing and we’ll just talk through what may seem to many on the call as some obvious details. Sometimes something will fall out that we need to address, or we’ll go deep on an issue.

    When you’re doing something that is mission critical where failure is not an option, consider briefing the thing. Just talk through what you’re about to do and the pertinent details. It’s really just a mental shift where you’re fully dedicating your mental capacity to thinking about what you’re about to do and the details, rather than assuming you’ve got it all figured out. And then when you get busy or are under pressure, you’ll have all the data and procedures stored in your short term mental cache ready to go.

    Footnote: “Failure is not an option” doesn’t originate from NASA. It actually comes from the film Apollo 13. But from what I’ve seen, aviation has enthusiastically adopted the phrase.

  • And Back Again.

    Just a test first blog post. Spent the afternoon setting up my blog and migrating the data from an archive I had stashed away. Managed to get posts going all the way back to 2007. Woohoo I have a blog again! And wow, things haven’t changed much at all. Still reverse proxying nginx to apache with letsencrypt and an htaccess file. The Web is very much still the same.

     

  • Wordfence Reviews – Find them on WordPress.org

    I’m posting this to help our customers find objective Wordfence reviews. If you are short on time and would like to view objective, reliable reviews for Wordfence that are moderated by volunteer WordPress moderators to remove spam, you can visit the Wordfence plugin review page on WordPress.org.

    I’m the founder and CEO of Wordfence. We make the most popular firewall and malware scanner for WordPress. We also offer a site cleaning and site security audit service.

    If you do a Google search for ‘wordfence reviews’ or ‘wordfence review’, it is quite likely that the first page of results will contain a competitor who has posted something that appears to be an ‘objective’ Wordfence review on his personal blog. That was posted in 2012, and I think it’s quite unreasonable for us to expect a direct competitor to have anything good to say about us, which he didn’t. 🙂

    The hosting landscape is complex and there are many affiliate and business partnerships between security companies, hosting companies and so on. It’s like spaghetti. For example, one major security company is owned by the founders of a huge hosting conglomerate. In another case, a major security company was bought by one of the largest hosting companies but still trades under its own brand. And then there are affiliate schemes or ‘kickbacks’ that motivate bloggers to write great reviews for one security provider and bad reviews for another.

    The bottom line is that it can be challenging to find objective reviews for Wordfence. The good news is that there is a source you can rely on: it is 100% objective and it is controlled by a group of volunteer moderators who are awesome and do a great job of removing spam and making sure that all reviews stay objective.

    Your most reliable and objective source of Wordfence reviews is the WordPress plugin repository.

    The plugin repository is where we distribute Wordfence. It is an open source collection of plugins available for WordPress. Anyone who uses a plugin and has signed up for a wordpress.org account can post a review on this page.

    The moderators who filter out spam are volunteers and they do a really great job of making sure vendors don’t ‘stuff’ good reviews into their product. They also make sure that competitors don’t come in and spam reviews to make someone else look bad.

    If you have a support issue related to Wordfence, I would also encourage you to search our forums for a solution or post there if you need help. We have dedicated team members who reply to our free customers in the forums. Our awesome support is why we have so many great reviews and a 5 star rating.

    Wordfence reviews

    Wordfence also has premium support for our paid customers which you can find at support.wordfence.com.

    I hope this blog post has cleared up any confusion on where to find objective and reliable Wordfence reviews.

    Regards,

    Mark Maunder – Wordfence founder/CEO.

    PS: Reviews like this one below from one of our customers really made my day. It also made Phil’s day. Phil is the security analyst who helped Mike recover from a hacked site. This review was posted today. Mike is one of many happy customers who have used Wordfence to help stay secure.

    Wordfence review

     

  • Why the term "cyber" is cool.

    In 1984 William Gibson published Neuromancer, his masterpiece, in which he popularized the term ‘cyberspace’. For many of us it described the world of ‘computers’ at the time. It captured the experience of disappearing into code.

    Later ‘cyberspace’ was an uncannily accurate metaphor for getting online and disappearing into, for me, the telephone networks through phone phreaking and later the Internet and the text based online communities like IRC, NNTP, telnet based MUDs and so on.

    The term ‘cyber’ is now mocked by those in information security as something uncool. I’m not sure why, but I think it’s because the term has been co-opted by companies trying to sell products in cyber security.

    For me and I think many others, ‘cyber’ and ‘cyberspace’ are precious reminders of the beauty of Gibson’s writing and how he accidentally captured the reality that was to follow in a beautiful metaphor.

    This is my favorite passage from Neuromancer as Case is cured and once again is able to access cyberspace. What I love about this passage is that it captures the sense of longing many of us have when we exist in the real world and the sense of belonging when we’re online.

    And in the bloodlit dark behind his eyes, silver phosphenes
    boiling in from the edge of space, hypnagogic images jerking
    past like film compiled from random frames.  Symbols, figures,
    faces, a blurred, fragmented mandala of visual information.
      Please, he prayed, _now --_
      A gray disk, the color of Chiba sky.
      _Now --_
      Disk beginning to rotate, faster, becoming a sphere of paler
    gray.  Expanding --
      And flowed, flowered for him, fluid neon origami trick, the
    unfolding of his distanceless home, his country, transparent
    3D chessboard extending to infinity.  Inner eye opening to the
    stepped scarlet pyramid of the Eastern Seaboard Fission Au-
    thority burning beyond the green cubes of Mitsubishi Bank of
    America, and high and very far away he saw the spiral arms
    of military systems, forever beyond his reach.
      And somewhere he was laughing, in a white-painted loft,
    distant fingers caressing the deck, tears of release streaking his
    face.
  • Working On-Site Considered Harmful

    It doesn’t make sense for knowledge workers to be on-site anymore.

    Working on-site comes with a significant cost. Quiet time is a precious commodity if you’re in any kind of cerebral role – and it’s rare in most office environments. Then there’s the distraction of commuting to work, commuting back, people coming and going, the office socializer who wants to chat, and so on.

    Working remotely has many advantages. If you’re using Slack, you don’t have a situation where the dominant person in the room gets to drown out other opinions. It makes communication more democratic and a side effect is that communication becomes much more relaxed. Less conflict == more fun and getting more done.

    When interaction happens via git and a bug tracker in the form of entering and updating issues and pull requests, it keeps things moving forward without the unstructured chaos that in-person communication can create. SaaS for remote workers makes communication more structured.

    It surprises me that so many companies in the software space are still hiring on-site workers and developers in particular. I suspect it’s for two reasons:

    Firstly, managers or execs think a major part of their contribution is to “oversee” their team. This comes from a kind of personal insecurity caused by them not being able to contribute in other areas – frequently because they’re non-technical, so they need to “manage” to contribute. This is solved by hiring execs or managers who are competent in their own right – and in a tech company they need to be hands-on technical and current in their skills. I’ve met too many managers who just “manage” and mention their MIT degree and tell coding war stories.

    Secondly, I think a reason companies want to hire on-site workers is a lack of trust. They don’t think it’s possible to hire people who can be left alone to create amazing things. They think the team has to be put in a room and monitored at all times. This has evolved into persuading them to stay in the room by bringing chefs and masseuses into the office.

    I think over the next 10 years we will see the first Googles and Amazons emerge with 100% remote workers. They will create a new normal for tech companies to go remote. That will cause a massive exodus from urban centers. It’s going to have a huge impact on property prices and rentals and a significant impact on the landscape. Cities like Seattle, which is overcrowded with Amazon workers, will see profound changes.

    Fifteen years from now we’ll look back and giggle at how we used to crowd smart people into little boxes with bright fluorescent lighting so that we could watch them while they did work they can do from anywhere.

    We’re hiring at Wordfence. All our roles are remote. We’re a team of 9 full-timers and we have 7 positions currently open (the forensics role is X3). If you’re the best in the world at what you do, are passionate about information security and you’d like to regain your freedom, we’d love to hear from you!

  • The Longer Term Effects of the Paris Tragedy

    Having recently lived in France for a year, my heart goes out to the French people. I lived in South Western France, but fell in love with Paris as a city of art, philosophy, history and music. That it was targeted with such violence last night is a tragedy of epic proportions.

    London Bridge lit up with the colors of the French Flag tonight.

    At this time there are 129 deaths and 352 injured according to Le Monde.

    I’d like to spend a few minutes thinking about the longer term effects of what just happened in Paris. My background, if you don’t know me: I’m the CEO of a cyber security company, I’m a software engineer, and I’m interested in public policy.

    2,977 victims died in the World Trade Center attacks on September 11th 2001. The attacks had a profound effect on public policy and foreign policy world-wide. The result was a US led war in Afghanistan and a further war with Iraq. The cost and effect of these wars continue to this day, 14 years later.

    The WTC attacks also led to the Patriot Act and a huge increase in surveillance by the United States and its intelligence partners. Those partners are the “Five Eyes”, which include the USA, United Kingdom, Canada, Australia and New Zealand. The Patriot Act was the tip of the iceberg, and since the Snowden revelations we have learned the depth and breadth of the increase in intelligence gathering and surveillance post 9/11.

    The Oriental Pearl Tower in Shanghai showing French colors tonight.

    The impact of the WTC attacks can, today, in my opinion, be compared to the impact of the Pearl Harbor attack in the way it changed US foreign policy and public policy. The day after the Pearl Harbor attack, the US declared war on Japan and Roosevelt and later Truman demanded the ‘unconditional surrender’ of Japan as the only acceptable end to the conflict.

    More recently, in the United States and world-wide, the post-9/11 public appetite for conflict had started to taper off, beginning in 2008 with the Obama campaign, which ran on a platform of exiting Iraq.

    Added to this, there was a tapering in the public appetite for, and tolerance of, surveillance following the Manning leaks published on WikiLeaks in 2010 and the Snowden revelations in 2013.

    The number of casualties in Paris yesterday is not as high as Pearl Harbor or 9/11, but we live in a post-9/11 world where we already have an increase in conflict and surveillance. The public also has an increased sensitivity to these kinds of attacks.

    The Brandenburg Gate in Berlin

    In my view, the Paris attacks will bring us back to the world-wide climate we encountered immediately post 9/11. It will ensure that France enters any war it hopes will reduce the threat of domestic terror and France will go beyond that. France will actively, as the USA did, seek retribution for the attacks yesterday. Manuel Valls (France’s Prime Minister – the equivalent of a Chief Operating Officer) said today that “We must annihilate the enemies of the Republic”, which sets the tone of the response going forward.

    If this had happened in the absence of 9/11, the French response would have been severe, but would not necessarily have been backed by a long term global response. Because this is post 9/11 and because it refreshes the global memory of the impact of terrorism, this will have a much wider influence on global governments and their public and foreign policy.

    The Sydney Opera House Tonight

    I expect a show of solidarity with France that goes beyond countries displaying the French flag on public and private buildings last night and tonight.

    France will likely be brought closer into the Five Eyes intelligence sharing arrangement which has so far excluded all European countries with the exception of the United Kingdom. [And in fact had an adversarial relationship with countries like Germany]

    In response to Charlie Hebdo, France passed a new surveillance law in May that allows the monitoring of phone calls and emails without the authorization of a judge. The law also requires ISPs to install devices to sniff Internet traffic and make that traffic available to French intelligence services. The law is essentially the USA Patriot Act without the need for a FISA court to authorize surveillance.

    San Francisco City Hall

    The tragedy yesterday will likely provide the impetus to pass additional laws that cover anything that legislation earlier this year may have missed. That earlier law doesn’t appear to have missed much.

    I hold no strong opinions either way on public surveillance. That we appear to need surveillance, I consider tragic. I’d also prefer to not have secrets, but a thought experiment I came up with a few years ago seems to indicate that the need for secrets is inevitable.

    My interest is in understanding what will happen next, and we appear to be headed into a deeper spiral of surveillance, conflict and secrecy. I’d prefer that things were different, but I’m angry too.