<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Elliot C Smith]]></title><description><![CDATA[Thoughts on building product, teams, software and AI.]]></description><link>https://www.elliotcsmith.com/</link><image><url>https://www.elliotcsmith.com/favicon.png</url><title>Elliot C Smith</title><link>https://www.elliotcsmith.com/</link></image><generator>Ghost 5.79</generator><lastBuildDate>Fri, 23 Feb 2024 03:12:24 GMT</lastBuildDate><atom:link href="https://www.elliotcsmith.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Edits and Egos]]></title><description><![CDATA[<p>Asking for feedback on your work is an important part of improving. Sometimes, receiving that feedback can feel extremely personal. Whether it&#x2019;s writing, slides, code or anything else we can tie ourselves up in our creations. Learning to separate ourselves from our work is an important step, one</p>]]></description><link>https://www.elliotcsmith.com/edits-and-egos/</link><guid isPermaLink="false">65b6b5d3b1a9a200013ff302</guid><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Sun, 28 Jan 2024 20:16:43 GMT</pubDate><content:encoded><![CDATA[<p>Asking for feedback on your work is an important part of improving. Sometimes, receiving that feedback can feel extremely personal. Whether it&#x2019;s writing, slides, code or anything else we can tie ourselves up in our creations. Learning to separate ourselves from our work is an important step, one I&#x2019;ll admit I sometimes forget.</p><p>Sometimes I&#x2019;ll ask for feedback knowing it will make the work better but end up feeling insulted when people provide edits. Objectively I know my first draft is never perfect but somehow I get it in my head that a missing comma is tantamount to me being a failure.&#xA0;</p><p>Once the moment passes, those edits almost always make the work better. My PhD thesis had a whopping 35 pages of recommended edits over its 200 odd pages. If I had tied each one of those to my ego it would have never been finished.</p><p><strong>Why it matters:</strong> In many contexts, the things we create should have a high bar for quality. Work for publication, code we write for our job, ideas we present to others. Baked into that work are ideas and meaning, we&#x2019;re responsible for ensuring those ideas reach the world in a refined form.</p><ul><li>In a professional context it is probably the simplest to understand. We&#x2019;re being paid to produce our work, getting feedback on it helps it improve and helps our customers or colleagues receive better output.</li><li>When we&#x2019;re making things ourselves the situation is more difficult. We cannot hide behind the banner of our corporation, in many ways our work does reflect us. Even still, feedback and editing are a key part of that process. The world sees our final draft, not the count of missing commas along the way.</li></ul><p><strong>How to detach yourself from your work:</strong> Remember is that the beauty is often in some intangible core of your work, not the work itself. I am a terrible painter, I could try to capture a beautiful scene on a canvas and it would likely look terrible. This doesn&#x2019;t mean the original scene was ugly, only that I didn&#x2019;t manage to capture it well. 
Think of editing not as a reflection on the core, but as a refinement in capturing it.</p><ul><li>I still get a little frustrated and defensive when I get feedback. That in-the-moment response is tough to change in a hurry. What&#x2019;s important is what you do next. Take the time to consider feedback before acting (or responding).</li><li>Like anything else, practice will help. Ask for feedback more often than you think you need it.</li><li>Ask for the right type of feedback. People should be able to point out that a sentence doesn&#x2019;t land or some code looks messy without giving you an alternative. In the end it is still your work to change and produce.</li></ul><p><strong>Choosing when to skip the edit:</strong> Sometimes editing can become a barrier to putting out work. There&#x2019;s no doubt that editing adds time. Depending on the context, sometimes the more important thing to do is just publish.</p><ul><li>These posts, for example. They usually get a light edit with something like Hemingway Editor but don&#x2019;t go out to other people before I post them.</li><li>Keeping the number of editors low is also important. Ask enough people and you&#x2019;ll eventually round off all the edges in your work and end up with something horribly generic.</li></ul><p>Almost everyone will find something to mention if you ask for a review; they don&#x2019;t want to seem silly. You also have full license to ignore any suggestions you receive. Including those in this post.</p>]]></content:encoded></item><item><title><![CDATA[Performance vs Diagnostic Metrics]]></title><description><![CDATA[<p>The value of any tracked metric is in the actions it inspires. Numbers on a wall or a slide deck don&#x2019;t mean anything on their own. They&#x2019;re signposts and signals to help guide behaviour. Making the metrics go up or down can be very rewarding. Some</p>]]></description><link>https://www.elliotcsmith.com/performance-vs-diagnostic-metrics/</link><guid isPermaLink="false">65aae72cd89a6f0001a9a06a</guid><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Fri, 19 Jan 2024 21:19:10 GMT</pubDate><content:encoded><![CDATA[<p>The value of any tracked metric is in the actions it inspires. Numbers on a wall or a slide deck don&#x2019;t mean anything on their own. They&#x2019;re signposts and signals to help guide behaviour. Making the metrics go up or down can be very rewarding. Some metrics, however, aren&#x2019;t there to be as big or small as possible.&#xA0;</p><p>These metrics I&#x2019;ll call Diagnostic Metrics. We choose Diagnostic Metrics to be flags, signalling that something is broken. They often have a threshold (or two, for a range) that, if surpassed, tells you to act. If the metric is in its healthy zone, the best thing you can do is leave it alone.</p><p><strong>Why it matters: </strong>There&#x2019;s a temptation to believe that every metric can be optimised forever. Sales can keep going up; cost of acquisition can keep coming down. For performance metrics, that makes sense. Trying to optimise diagnostic metrics, however, is a waste of effort. If we&#x2019;re happy for the CPU load on our server to average below 60%, trying to drive it down to 0% doesn&#x2019;t really mean anything.</p><ul><li>Diagnostic metrics exist at a personal level as well. Not to overload the term &#x2018;diagnostic&#x2019; here, but many blood markers have a similar healthy range. 
If you&#x2019;re in the healthy band for iron levels then there&#x2019;s no change needed.</li><li>Diagnostic Metrics won&#x2019;t give you an immediate fix. They are there to provide a diagnosis, an indication that something is wrong. They&#x2019;re not there to give you the cure.</li><li>Take the average CPU load as an example. If the server spends a day at 95% load there&#x2019;s no lever we can pull to drop it back down (short of spending more on CPU). The alert sparks an investigation; once we find the cause, we make the change and the alarm disappears.</li></ul>
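<p>To make that concrete, here&#x2019;s a minimal sketch of a diagnostic check in Python. It assumes the third-party <code>psutil</code> package; the 60% threshold and the print-based alert are illustrative stand-ins for whatever monitoring stack you actually use.</p><pre><code>import psutil

CPU_HEALTHY_MAX = 60.0  # illustrative threshold: below this is the healthy zone

def check_cpu_diagnostic() -> None:
    # Sample average CPU utilisation over one second.
    load = psutil.cpu_percent(interval=1)
    if load > CPU_HEALTHY_MAX:
        # Out of range: the metric diagnoses, it doesn't cure, so flag a
        # human to investigate rather than trying to "fix" the number.
        print(f"ALERT: CPU at {load:.0f}% (healthy max {CPU_HEALTHY_MAX:.0f}%), investigate the cause")
    # Otherwise the metric is in its healthy zone and the best action is no action.

check_cpu_diagnostic()</code></pre><p>The shape is the point: while the metric sits in its healthy zone the check does nothing, and once it crosses the line it asks a human to investigate rather than prescribing a fix.</p>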
<p><strong>How to classify a metric: </strong>You may be thinking, is my metric a Diagnostic Metric or a Performance Metric? The answer, perhaps frustratingly, is that it depends. Any metric or measure can be Diagnostic or Performance depending on how we use it.</p><ul><li>Let&#x2019;s say this quarter we&#x2019;re focused on software performance. We use Response Time as a guide. We&#x2019;ve not measured it before for this endpoint but have a feeling based on customer feedback that it&#x2019;s too slow. For the next quarter, Response Time is a performance metric. We work on it to see how low it can go. We make some progress, complaints stop and we&#x2019;re happy.</li><li>Next quarter rolls around and we already have our Response Time metric on the dashboard. Given complaints are low enough we don&#x2019;t want to dedicate more time to it, but we certainly don&#x2019;t want it to regress. Now, Response Time has become a Diagnostic Metric. We pick a threshold and as long as we don&#x2019;t drift above that level, we&#x2019;re happy focusing effort elsewhere.</li><li>A good time to switch from Performance to Diagnostic is when we see diminishing returns on new work. We can drive up our click-through rate for a quarter or two but eventually we&#x2019;ll hit a ceiling. We then decide if a much smaller shift is worth as much effort, or if resources would be better spent elsewhere.</li></ul>]]></content:encoded></item><item><title><![CDATA[Chat is poor UX for most users.]]></title><description><![CDATA[<p>Large Language Models (LLMs) like ChatGPT have rapidly become commonplace tools. They&#x2019;re the most transparent AI application I can think of: users know they&#x2019;re interacting with an AI. Despite that, I see chat as a complex and difficult user experience for most applications. It&#x2019;s</p>]]></description><link>https://www.elliotcsmith.com/chat-is-poor-ux-for-most-users/</link><guid isPermaLink="false">65a3bd63bfb3870001cedf79</guid><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Sun, 14 Jan 2024 10:55:18 GMT</pubDate><content:encoded><![CDATA[<p>Large Language Models (LLMs) like ChatGPT have rapidly become commonplace tools. They&#x2019;re the most transparent AI application I can think of: users know they&#x2019;re interacting with an AI. Despite that, I see chat as a complex and difficult user experience for most applications. It&#x2019;s similar to interacting with your computer via the command line: powerful, but at the cost of a high bar. The dominance of OSX as a user-friendly OS hints that chatting to AI will become reserved for power users.</p><p><strong>Why it matters: C</strong>hat offers breadth as an interface, but it has thus far proved to be somewhat unnatural. Guides on prompt engineering show these models need help to perform at their peak. To bring the value of these models to everyone, we need to wrap them in accessible and intuitive interfaces.</p><ul><li>Chat is unlikely to disappear, but it&apos;ll probably be reserved for power users, much like the command line is today.</li><li>Graphical user interfaces have replaced the command line for day-to-day users. Teams of user experience designers work with end users to try and remove friction.</li></ul><p>I had a moment recently that triggered this post. There&apos;s a new feature on YouTube that plays an effect on the like button when someone says &#x201C;hit the like button&#x201D;. This was a small moment but likely involved several ML models: one translating speech to text, another looking for phrases about the like button. As far as I can tell, this all happens behind the scenes with no user intervention.&#xA0;</p><p><strong>What&#x2019;s holding it back: </strong>Designing user experience is hard. It is a specialty in itself. Chat is wide open and flexible, which has made it an attractive default. Despite that, our job is to remove friction in giving users what they&#x2019;re seeking. The current interface for many LLMs is setting system-level prompts. This is a behaviour which doesn&#x2019;t lend itself to moving away from &#x2018;chat&#x2019; as the primary interface.</p><ul><li>Providers like OpenAI will likely start to introduce other ways to guide models. Other models for image generation have some options with things like temperature.</li><li>The challenge building on top of off-the-shelf models is that adding other levers requires more training. Without that level of access to a model, chat becomes the only option.</li><li>It&#x2019;s also worth noting that even the best of today&#x2019;s generative AI models have some serious shortcomings. Hallucination, regurgitation of copyrighted content and safety are still being worked on. Where they land and what that means for the future of these models is yet to be determined.</li></ul>
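<p>For illustration, here&#x2019;s a minimal sketch of that system-prompt-plus-temperature interface through the OpenAI Python SDK. The model name, prompts and temperature value are placeholders rather than recommendations.</p><pre><code>from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The whole "interface" is carefully worded text plus a sampling knob.
# Powerful, but it's configuration-by-incantation, not a designed UX.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    temperature=0.2,        # one of the few non-text levers exposed
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarise this week's customer feedback."},
    ],
)
print(response.choices[0].message.content)</code></pre><p>Every lever here lives in code or in prose, which is exactly the kind of friction a well-designed interface would hide from day-to-day users.</p>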
<p>Ultimately, users buy products to solve problems for them. We need to make sure the products we build are solving problems with as little friction as possible. Explaining a problem, correcting mistakes, and chanting secret prompts won&apos;t work long term. We&#x2019;re still early in the world of UX for AI so I won&apos;t call it a loss at this stage. Five years from now we&apos;ll likely look back on chat as the primary UX the same way we look at the command line today.</p>]]></content:encoded></item><item><title><![CDATA[How to avoid picking terrible metrics]]></title><description><![CDATA[<p>The only surefire way I know to change something is to start by measuring it. That&#x2019;s true at a personal level for things like a 5k time as well as in a professional setting for SMART goals, OKRs or whatever variant is currently in fashion. While wanting</p>]]></description><link>https://www.elliotcsmith.com/how-to-avoid-picking-terrible-metrics/</link><guid isPermaLink="false">6599da2dbfb3870001cedf35</guid><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Sat, 06 Jan 2024 23:18:34 GMT</pubDate><content:encoded><![CDATA[<p>The only surefire way I know to change something is to start by measuring it. That&#x2019;s true at a personal level for things like a 5k time as well as in a professional setting for SMART goals, OKRs or whatever variant is currently in fashion. While wanting to measure things is easy, picking what to measure is not. You can easily spend months finding the perfect correlation to success, but if your metrics are changing every quarter, this is almost certainly a waste of time.</p><p>There are three general rules of thumb I use to judge a metric. They all come down to the lifetime of that metric. Some measures are just fine for a quarter but would steer us off track if followed for a year. Conversely, some metrics make sense when thinking about a ten-year plan but are meaningless on a day-to-day scale.</p><p><strong>How to avoid feeling powerless: </strong>Sometimes there is a really obvious metric. Something like &#x2018;Raise our EBITDA to $2M by the end of the calendar year&#x2019;. At face value that is a pretty good metric. It&#x2019;s precise, well defined and has a timeline. If that was an annual goal it&#x2019;d be ideal. If you&#x2019;re in a big company, however, you may have a good portion of the company wondering how on earth they can do anything today to make that happen. Typically called lagging metrics, these are the outcomes; they&#x2019;re great for targets but not great for in-the-moment action. To correct for them, we need leading indicators.</p><ul><li>For this metric within a sales team, we might look at something like new cold calls or follow-up meetings. That&#x2019;s a number that an individual can optimise on a daily basis that should help move towards the long term goal.</li><li>For other teams it might be completing a smaller project. Maybe there are customers waiting for a new paid feature; getting that feature shipped might be close enough to track. If that project is huge, the tickets or tasks within it might work too.</li><li>OKRs try to get this done with lower objectives feeding into higher ones. That can help, but only if there is also a shift from lagging to leading as you move downwards through objectives.</li></ul><p><strong>How to avoid optimising the wrong thing: </strong>Track leading metrics for too long and you&#x2019;ll start to see them lose meaning. Anything you measure runs the risk of eventually losing its meaning and becoming a goal in itself. Take the sales cold-call objective above: if that becomes the primary incentive for a sales team you may start to see meaningless calls being made just to hit a quota. In order to avoid your metrics becoming pointless you can try one of two things. One, let your metrics expire. Two, pair quantity metrics with quality metrics.</p><ul><li>When you pick leading indicators it&#x2019;s fine to pick metrics with only a loose correlation. They&#x2019;re around to be day-to-day signposts that things are moving in the right direction. As soon as they become more than that they lose their value.</li><li>Given these metrics are often loosely correlated and somewhat arbitrary, one method to avoid them becoming meaningless is to let them expire. If there is a similar, but not identical, measure of leading success each quarter it will be much harder for anyone to game the numbers.</li><li>Be warned though that changing the metrics too often can make things feel chaotic. It&#x2019;s a balance of ensuring metrics are around long enough to let people optimise but not so long that people over-optimise.</li><li>Pairing metrics means finding a counterbalance. For cold calls that might be cold-call-to-meeting conversion rate. 
The goal here is to ensure that any increase in the volume of calls isn&#x2019;t coming at the expense of quality.</li><li>If you take this route it&#x2019;s important to make sure both metrics are equally valued. It&#x2019;s easy to have a quality metric that becomes an afterthought if incentives are still aligned solely to quantity.</li></ul><p><strong>How to avoid measurement becoming a time sink: </strong>Sometimes it becomes tempting to try and find the perfect metric, or to find enough metrics to get perfect visibility over every aspect of what you&#x2019;re trying to optimise. Collecting, collating and displaying metrics can be a full-time job. It&#x2019;s important to make sure that work doesn&#x2019;t become a distraction. At the end of the day, the metrics, dashboards and goals you&#x2019;re setting are signposts for the value, not the value itself.</p><ul><li>A dashboard with a thousand metrics will tell any story you like. You can find evidence for or against your gut feeling. That comfort is sometimes a trap. Depending on your personality it might paint a far too optimistic picture, or one filled with signs of failure at every turn.</li><li>On the other end of the spectrum is the quest for the one perfect measure. The combined, weighted, maybe even AI-generated signal that should drive all behaviour. Beyond the major risk of being impossible to influence, it can create an endless workload of collecting and cleaning data just to keep the metric up to date.</li><li>Signposts, as most metrics should be, are there for a quick glance to confirm things are moving in the right direction. Or as an early signal to stop and check what&#x2019;s taking things off course.</li></ul><p><strong>Going Deeper:</strong> Given the time of year, new metrics and new goals are a hot topic. There is far more written on this than I could consolidate here. Broadly I would suggest the following resources as decent further reading on how to pick metrics in a business context. There is likely a whole blog post (or more) I could also write about statistics and how to avoid making decisions on random noise, but that is for another time.</p><ul><li>Measure What Matters - <a href="https://www.whatmatters.com/the-book?ref=elliotcsmith.com"><u>https://www.whatmatters.com/the-book</u></a>&#xA0;</li><li>How to Measure Anything - <a href="https://hubbardresearch.com/publications/how-to-measure-anything-book/?ref=elliotcsmith.com"><u>https://hubbardresearch.com/publications/how-to-measure-anything-book/</u></a>&#xA0;</li><li>High Output Management - <a href="https://www.goodreads.com/book/show/324750.High_Output_Management?ref=elliotcsmith.com"><u>https://www.goodreads.com/book/show/324750.High_Output_Management</u></a>&#xA0;</li></ul>]]></content:encoded></item><item><title><![CDATA[Three things that helped my side projects drag on for years.]]></title><description><![CDATA[<p>This year alone, I&apos;ve spent months on a side project that would have once taken me a weekend. I&apos;m glad I did, because for the longest time I waited for that perfect weekend and got nothing done. 
So after lots of doing nothing, then a long</p>]]></description><link>https://www.elliotcsmith.com/three-things-side-projects/</link><guid isPermaLink="false">65588ad4596d710001ef8e5b</guid><category><![CDATA[programming]]></category><category><![CDATA[building]]></category><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Tue, 21 Nov 2023 00:30:26 GMT</pubDate><content:encoded><![CDATA[<p>This year alone, I&apos;ve spent months on a side project that would have once taken me a weekend. I&apos;m glad I did, because for the longest time I waited for that perfect weekend and got nothing done. So after lots of doing nothing, then a long time making progress little by little, here are some things that helped.</p><p><strong>Set a schedule:</strong> Right now for me this is 45 minutes first thing in the morning. I get up early and try to make a bit of progress. Often that progress feels frustratingly short on the day, but over weeks and months those efforts stack up. I&apos;ve found that a little daily effort is easier to sustain than trying to lump that time into one block on a weekend. If you miss one day you&apos;re 45 minutes behind; if your one weekend block is impossible due to other plans, things feel much further behind.</p><p><strong>Leave some breadcrumbs:</strong> Often, the hardest part is getting started. If you&apos;re following a schedule and only have a small window, getting right into things is even more critical. What&apos;s helped me here is leaving a jumping-off point for the next time I start on a project. </p><ul><li>If it&apos;s code, it&apos;s a line in my git commit like: <code>NEXT: Add a form for adding a new weekly time block</code>.</li><li>For blog posts and videos I&apos;ll leave something not quite done, a half-finished sentence or edit.</li><li>These breadcrumbs mean I don&apos;t have to eat into my session choosing between the many things I could do next.</li></ul><p><strong>Make it fun:</strong> If your side project feels like a total chore, it&apos;ll be hard to keep going. Every project has its slow periods but it can help to inject one or two small things that keep it interesting. That might be a new library or tool to dive into. It helps ensure that even if the particular project is a flop, you can point to something you picked up on the way. That being said, I&apos;ve found the chance of a failed project increases the more &apos;new&apos; things I add. Boring and familiar has its place too.</p><p>Assuming your side project is in fact &apos;something on the side&apos;, I would encourage you to become OK with it taking as long as it takes. When I was at university, time was plentiful. I could build, study, work and do everything else I needed to.</p><p>For a while I beat myself up because I wasn&apos;t pulling all-nighters to build things; later I realised thinking that way was unhealthy. These days there are other things high on my list of priorities and that&apos;s okay. I can still build, just in a way that matches the time I have available.</p>]]></content:encoded></item><item><title><![CDATA[The hardest part of prioritising product]]></title><description><![CDATA[<p>The most difficult part of building product with limited features is prioritisation. The hardest part of prioritisation is staying consistent. After many years and many &apos;methods&apos; I can say that sticking to one thing long enough to see results is most of the hard work. 
These days I</p>]]></description><link>https://www.elliotcsmith.com/the-hardest-part-of-prioritising-product/</link><guid isPermaLink="false">653dc64dd237110001133a7a</guid><category><![CDATA[product]]></category><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Sun, 29 Oct 2023 03:33:55 GMT</pubDate><content:encoded><![CDATA[<p>The most difficult part of building product with limited features is prioritisation. The hardest part of prioritisation is staying consistent. After many years and many &apos;methods&apos; I can say that sticking to one thing long enough to see results is most of the hard work. These days I am far less attached to methods and spend a lot more time trying to build consistency.</p><p><strong>Why it&apos;s tempting:</strong> Some days you feel like things are stuck. You&apos;re not shipping as fast as you once did. There&apos;s a lot to juggle and it feels a bit like chaos. If you just get the setup right, things might all be simple.</p><ul><li>Assuming that you&apos;re working on something novel, you&apos;re going to spend a lot of time in the unknown. Problems worth solving don&apos;t have known solutions.</li><li>If you&apos;re a maker it&apos;s hard not to have your maker hat on even when you&apos;re in management mode. You want to build systems that are optimal, ones that feel smooth and remove all the complexity.</li><li>A clean slate is extremely tempting. You built from zero to your current setup, so it can feel like a second run at it would be easier.</li></ul><p>It&apos;s easy to get stuck in a loop here. Restarting the first little chunk of a prioritisation system. Each time feels great in the beginning but it&apos;s easy to end up back where you started, this time with a slightly different configuration or process.</p><p>If you really want to ship good stuff, I&apos;d recommend smaller tweaks at a lower frequency until you hit a point where it&apos;s clear you need a major revision. Advice that is just as true for a complete codebase rebuild as it is for a rebuild of process.</p><p><strong>Why it matters:</strong> Building product, either alone or in a team, is a process of shipping the most good stuff as quickly as possible. Regardless of size, teams I have been a part of always have more they want to build than they can get out the door. The teams I&apos;ve seen do well are the ones that manage to pick the right things to build more often.</p><ul><li>Choosing those things is tough. Even getting them onto a list can be a challenge, but let&apos;s assume we have a list of ideas that aren&apos;t totally crazy.</li><li>Ultimately we want to build all the ideas, but whether it&apos;s a personal project, startup or large organisation it&apos;s nice to get signal that we&apos;re not wasting time.</li><li>Just about every company that&apos;s reached a certain size will eventually publish a blog post on how they build product. Inevitably this will be some combo of team make-up, prioritisation methods and (hopefully) feedback loops to know what got built is working.</li></ul><p>In my early days as a developer and even as I started to manage teams, it was easy to see some new method and be tempted to shift process. Every blog post paints a glossy picture of smooth-running product teams.</p><p>Changing systems has two costs. First is an adjustment period: new systems take time to set up. Second is a false reset of expectations: we buy ourselves false time to let the new system get up to speed. 
The harder yet admittedly more boring option is to keep things going.</p><p><strong>Advice to my former self:</strong> I am not here to say never change systems. Going from gut feel to some sort of ranking system will probably help. What I am advocating for is patience. Small tweaks to a long-running process can add up far more significantly than big swings. It also helps avoid blog-post-methodology fatigue throughout the rest of the team.</p><ul><li>When I am on the tools building things, there&apos;s no better feeling than being in a groove. Maintaining process or making small tweaks helps maximise that feeling.</li><li>It is fairly unlikely that anyone is going to write blog posts about their crappy systems. We don&apos;t hear about teams warming up to new methods, failed systems or downsides.</li></ul><p>When your current system starts to really fail, you&apos;re going to know. Post-it notes won&apos;t last long when your team grows. Customers will eventually bring in more requests than your system can handle. When you hit these points, revision is possible.</p><p>Changing systems because you feel like things aren&apos;t &apos;perfect&apos; is not the way to win. Building things is hard and requires hard work; no methodology out there is going to change that.</p>]]></content:encoded></item><item><title><![CDATA[Write more crappy blog posts]]></title><description><![CDATA[<p>If you hold off hitting publish until a great idea comes along, your writing probably won&apos;t be ready to capture it.</p><p><strong>Why it matters: </strong>Every now and then you&apos;ll have a thought. One you think is novel enough to be worth sharing. If you are at all compelled</p>]]></description><link>https://www.elliotcsmith.com/write-more-crappy-blog-posts/</link><guid isPermaLink="false">6520eb3750a2640001a3da7d</guid><category><![CDATA[writing]]></category><category><![CDATA[meta]]></category><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Sat, 07 Oct 2023 05:28:05 GMT</pubDate><content:encoded><![CDATA[<p>If you hold off hitting publish until a great idea comes along, your writing probably won&apos;t be ready to capture it.</p><p><strong>Why it matters: </strong>Every now and then you&apos;ll have a thought. One you think is novel enough to be worth sharing. If you are at all compelled to write online, you might be tempted to blog about it. Having an idea you can&apos;t quite articulate can be incredibly frustrating. Much like everything else we do, the answer is more practice.</p><ul><li>Writing can be tough. There&apos;s usually no better way to realise how poorly formed your ideas are than to try and write them down.</li><li>Most of us likely write a lot in our day to day: email, texting, all sorts of communication is text. Despite this, getting a brand new idea into words can be tough.</li><li>It&apos;s easy to put off publishing smaller, simpler ideas out of worry that they&apos;re not big enough to matter. I am personally guilty of this one.</li></ul><p><strong>How to start:</strong> In the end it comes down to the most annoying kind of advice. Just do the work. To get yourself into a spot where you&apos;re ready when big ideas come along, you need to practice.</p><ul><li>Writing down and publishing little ideas is the most direct way to practice. The more you do it, as long as you&apos;re deliberate, the better you&apos;ll get.</li><li>Writing but not hitting publish isn&apos;t enough. Most people put a higher bar of quality on something that&apos;s being made public. 
Even if nobody will ever read it.</li><li>That doesn&apos;t mean hours of painstaking editing. It&apos;s a blog after all, not a novel.</li><li>One thing I have found handy in this regard is keeping a list of &quot;Things I could write about&quot;. I pop new ideas on there when they come up and then I sit down and write.</li></ul><p>In fact, this very post came from one of those ideas. Somebody once said people blog about things they wish someone else had told them. This is probably true here.</p><p><strong>The end goal:</strong> It&apos;s important to remember that these posts are not meant to be wonderful. The goal here is to do the reps. Be ready for when the big ideas arrive.</p><ul><li>You probably will get better at writing out little ideas. If you pay attention to what you&apos;re writing, you&apos;ll spot areas where you can improve.</li><li>You&apos;ll also get faster at getting ideas out of your head and into words. Some of those words might get culled in an edit but it&apos;s a lot easier to do that when they&apos;re solid.</li></ul><p>Plus in the end, who knows, some of those little ideas might be great big ideas for someone else.</p>]]></content:encoded></item><item><title><![CDATA[Just because it's old, doesn't make it tech debt.]]></title><description><![CDATA[I spent a very long time incorrectly thinking that obsolete abstractions were tech debt.]]></description><link>https://www.elliotcsmith.com/tech-debt/</link><guid isPermaLink="false">6518bf50d935ef0001dd268d</guid><category><![CDATA[product]]></category><category><![CDATA[programming]]></category><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Sun, 01 Oct 2023 00:50:43 GMT</pubDate><content:encoded><![CDATA[<p>I spent a very long time early in my career incorrectly thinking that obsolete abstractions were tech debt.</p><p>Finding time to reduce tech debt is an important part of balancing development time. Making up for previous intentional trade-offs of quality for time is tech debt. A mental model or abstraction can become unsuitable for users as the variety of use cases grows. </p><p>Replacing these abstractions should be a part of product process, not tech maintenance. Replacing abstractions without concrete use cases will likely lead to premature optimization and new abstractions which are unlikely to be correct.</p><h2 id="backstory">Backstory</h2><p>I&apos;m now leading product at a logistics technology startup with a code base roughly four years old. Recently we needed to scope an upgrade to how we store prices for point-to-point transport of goods. The setup we designed would map prices against two locations. These locations could be concrete places, like a warehouse, or broad locations like a suburb.</p><p>In designing this setup, we found that a number of other places in the code base had similar concepts. We made the call that this unified location model brought a lot of quality-of-life additions, and we would work to migrate the code base to use this new model where we could. After input from the development team, it turned out this would be a big job. One we initially painted as cleaning up years of tech debt.</p><p>It&apos;s easy to come into an existing project, or look back on a long-running one, and assume everything from the past is out of date. Having done the work it is easy to look back and pick out issues. 
We need to remember that what we needed yesterday might have been different from what we need today.</p><p>Along those same lines, building next year&apos;s solution today isn&apos;t the right move most of the time. We assume we know better looking backwards, but I&apos;d bet a year from now, whatever we build today will have its flaws too. Building products, especially software, is rarely a bounded process. We grow, requirements change and the world we&apos;re building in continues to shift.</p><p>Treating everything we&apos;ve done in the past as tech debt or a mistake only serves to make us miserable. Yes, tech debt is real but it should be reserved for real instances of trading off speed for completeness, not a change in complexity or requirements.</p>]]></content:encoded></item><item><title><![CDATA[Zero to Production in Rust - Book Review]]></title><description><![CDATA[<p>This week I finally finished working my way through this book. The very short summary is that it&apos;s good: if you&apos;re new to Rust or want to dive a little deeper into it, check it out. <a href="https://www.zero2prod.com/index.html?ref=elliotcsmith.com">https://www.zero2prod.com/index.html</a></p><p>A few more thoughts:</p><ul><li>Working with</li></ul>]]></description><link>https://www.elliotcsmith.com/zero-to-production-in-rust-book-review/</link><guid isPermaLink="false">64e31f76d54d200001ae5c33</guid><category><![CDATA[books]]></category><category><![CDATA[programming]]></category><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Mon, 21 Aug 2023 08:45:08 GMT</pubDate><content:encoded><![CDATA[<p>This week I finally finished working my way through this book. The very short summary is that it&apos;s good: if you&apos;re new to Rust or want to dive a little deeper into it, check it out. <a href="https://www.zero2prod.com/index.html?ref=elliotcsmith.com">https://www.zero2prod.com/index.html</a></p><p>A few more thoughts:</p><ul><li>Working with Rust is pretty nice. I&apos;ve worked with it quite a lot to date but it was another reminder that the tooling and package ecosystem is pretty excellent.</li><li>There were a lot of TDD + refactor loops in the book. That worked okay for me as I was chipping through it in chunks of about 30 mins first thing in the morning. I imagine if you sat down to run through the whole book, that could get annoying.</li><li>It took me about 25 hours start to finish to work through the content. I wasn&apos;t aiming for speed by any means but that might be interesting for some people when considering the book.</li><li>Some libraries had undergone a version change since the PDF I purchased was created. I took that as a bit of a personal challenge to migrate to the new library. SQLX in particular has a new offline mode which took a moment to work out. Overall though, the book and the code repo seem to be getting minor updates and if you stick to the noted packages you&apos;ll be fine.</li><li>The tracing, logging and related tools mentioned were very cool. They set a high bar for other ecosystems.</li><li>The book is written really well; props to the author for creating some really approachable content.</li><li>The final comment I will add is that following a course feels like it&apos;s teaching you a lot, but having the code there can be a crutch. I finished the book and started a similar project. It was interesting how quickly I needed to head back to Google to check on syntax etc.</li></ul><p>Overall, good book. 
I feel more confident in Rust than before and am keen to get stuck into some larger projects.</p>]]></content:encoded></item><item><title><![CDATA[Population Density]]></title><description><![CDATA[<p>I&apos;ve spent the last two weeks in Japan. Mostly in big cities like Tokyo. Being here, something stands out in stark contrast to my everyday experience of living in a small-to-mid-sized city in Australia (Brisbane). While no place is perfect</p>]]></description><link>https://www.elliotcsmith.com/population-density/</link><guid isPermaLink="false">64ae6ae94c6c1900018a0101</guid><category><![CDATA[society]]></category><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Wed, 12 Jul 2023 09:22:54 GMT</pubDate><content:encoded><![CDATA[<p>I&apos;ve spent the last two weeks in Japan. Mostly in big cities like Tokyo. Being here, something stands out in stark contrast to my everyday experience of living in a small-to-mid-sized city in Australia (Brisbane). While no place is perfect, I have come to feel a strong appreciation for the power of high population density coupled with good urban design.</p><p>Similar cities exist all over the world but let&apos;s stick with Tokyo as an example for the moment given it&apos;s where I was most recently.</p><p>There are a lot of people in Tokyo. Over 13 million by last count, with another 2-3 million commuting in and out during the day. Despite this, Tokyo didn&apos;t feel packed. You could walk in the street; cars weren&apos;t bumper to bumper. This is likely mostly due to the world-class public transportation. The networks of trains and subways keep the city moving. Missing your subway means a few minutes&apos; delay at most. Better yet, the cost to get almost anywhere across the city is low, equivalent to a few Australian dollars at most.</p><p>This was the first marvel of population density. Systems like this cannot exist at a small scale. You need a certain volume of people to run through a station every day to be able to offer prices that low. You need the volume to justify the outlay for tunnels, new lines and new stations. All of this is far more difficult when density is low. Where I live in Australia a large portion of the &quot;city&quot; population lives in the suburbs. Getting a train network to every up-and-coming suburb would be a massive drain on government funds. Pack the same population into a much smaller area and the return on investment is much higher.</p><p>The second thing that struck me was the number of small businesses. Tokyo is full of office buildings, but more so it&apos;s full of small restaurants, cafes and people delivering lunch from the back of a van. Global pandemics aside, the chance of running a successful restaurant grows with the local population.</p><p>Hell, even a bad one probably lasts longer just based on having more people willing to try things once before never coming back.</p><p>I don&apos;t think anyone could ever eat or drink everywhere in Tokyo. By the time you&apos;d made it through a new lunch spot every day, there would be new ones opened to add to the list.</p><p>Places to eat and drink were high on my list as a tourist but this is no doubt true for businesses of just about any kind. When you start a business in a city of 15 million, fretting over global expansion plans is not a day-one concern. In Australia on the other hand, it&apos;s top of mind.</p><p>I have no doubt that brands start, grow and prosper without ever leaving the city. 
That gives a lot of people the opportunity for a lot of success. Something I think we need more of. It also creates the need for a vast support network.</p><p>No matter where you look in a big city there will be people working to keep it running. Cleaning, repairing, constructing and supporting. These jobs multiply with population. If every house needs lights and air conditioning, there&apos;s a lot of opportunity when you put 3000 people in every square kilometre.</p><p>The final thing that struck me is how abruptly the city flipped to greenery. Catch a train in any direction for half an hour and high rises are replaced with mountains and trees. There are plenty of people living outside the city as well, but high population density needn&apos;t be at the expense of greenery.</p><p>I will say this all hinges on doing things well. I&apos;ve been to other large cities where things aren&apos;t as smooth. The low crime rates in Tokyo don&apos;t just happen. The city planning and city management to make it run this way are huge factors alongside density and I give those credit.</p><p>What I felt though was the potential that comes from bringing a lot of humans together. Maybe that can exist in other ways; it probably does a little online, but it was powerful feeling it first hand.</p><p>I am not setting my sights on moving to Tokyo but this has made me viscerally reflect on how much good can come from building denser, taller cities and bringing people together.</p>]]></content:encoded></item><item><title><![CDATA[Assuming everyone is a gullible moron]]></title><description><![CDATA[<p>Somebody lied on the internet. Happens every day. Yet, for some reason there&#x2019;s a great temptation to engage. A temptation to step in and correct the lie. I&#x2019;d say this is equally true when the lie is obvious. I see this most on Twitter but</p>]]></description><link>https://www.elliotcsmith.com/everyone-is-a-gullible-moron/</link><guid isPermaLink="false">64899478882caf000149888e</guid><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Wed, 14 Jun 2023 10:35:54 GMT</pubDate><content:encoded><![CDATA[<p>Somebody lied on the internet. Happens every day. Yet, for some reason there&#x2019;s a great temptation to engage. A temptation to step in and correct the lie. I&#x2019;d say this is equally true when the lie is obvious. I see this most on Twitter but it happens in most places with a comments section. There&#x2019;s something deeply tempting about stepping in to comment. For a long time, I wasn&#x2019;t sure what it was. Now I think it&#x2019;s ego. There is a little part of all of us that assumes everyone else is a gullible moron.</p><p>This isn&#x2019;t limited to lies on the internet. There are plenty of examples in which we assume we&#x2019;re the exception. Most of us, when asked, will report to be above average at driving, better looking than most and smarter than most of our peers. All of this despite the fact that, statistically, that&#x2019;s impossible.</p><p>In much the same way, when we see something obviously wrong on the internet we&#x2019;re enticed to act. If we don&#x2019;t say something, how would anyone know this is wrong? If we leave it alone it might spread; it might become a &#x201C;well known fact&#x201D; that&#x2019;s built on a lie.</p><p>We become tempted to be the hero but in the end, we&#x2019;re taking the bait. The sign of a good troll is to be compelling enough to engage without being downright insane. 
Toe the line of truth just enough that we move to correct the narrative. We rise up swinging the sword of &#x201C;um, actually&#x201D; with the hope that we&#x2019;ll correct our way to victory.</p><p>Ideas like these become self-reinforcing. Adding our comments helps to show just how lucky it was that we were there. Instead we need faith in each other. An understanding that we&#x2019;re not above the average, and that&#x2019;s okay. If we&#x2019;re nothing special and we can see behind the curtain, others can too. Trust that we&#x2019;re not all fools. A trust that extends beyond bandwagons on the internet.</p><p>If we can shift to a default of &#x201C;this is BS and everyone can see it&#x201D; we start to set different defaults. Defaults are a strangely powerful thing. They set us up for a lot of our downstream behaviour. When the default is trust and a sense that it&#x2019;s us against the same forces, we build community. We look to others and give the subtle head nod of &#x201C;look at these fools trying to pass this off as true&#x201D;. We lose the temptation to engage. There&#x2019;s no need to correct the lie; it&#x2019;s an obvious lie. We can shift our focus from engaging in pointless town squares and start focusing on real problems. Working together to make things better.</p>]]></content:encoded></item><item><title><![CDATA[What makes a model a Foundation Model?]]></title><description><![CDATA[<p>Major tech companies like Google, Meta and OpenAI have spent a large part of this year releasing what they call new foundation models. The world seems to be in a race to build the go-to model for new models and products. Despite all of the hype (or perhaps</p>]]></description><link>https://www.elliotcsmith.com/what-makes-a-model-a-foundation-model/</link><guid isPermaLink="false">644f1167f92714003d338caf</guid><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Mon, 01 May 2023 01:25:22 GMT</pubDate><content:encoded><![CDATA[<p>Major tech companies like Google, Meta and OpenAI have spent a large part of this year releasing what they call new foundation models. The world seems to be in a race to build the go-to model for new models and products. Despite all of the hype (or perhaps because of it), foundation models as a concept are still poorly defined.</p><p>I think it&apos;s important that we come up with some simple criteria that a model needs to meet to be considered &apos;foundational&apos;. Since I&apos;m a developer, these are going to be developer-centric and have a strong bias towards open source. Here&apos;s what I&apos;m thinking:</p><ol><li>Foundation models are pre-trained</li><li>Foundation models are general in nature</li><li>Foundation models are open source</li></ol><h2 id="foundation-models-are-pre-trained">Foundation models are pre-trained</h2><p>There are hundreds of new papers each week showing off new model architectures. Chances are that some of them are great, but reproducibility in AI is still a big problem. If you want to call your model foundational it needs to be pre-trained and ship as a ready-to-run model.</p><p>That doesn&apos;t mean it needs to smash all benchmarks without fine-tuning, but just proposing a model architecture is not providing a full foundation.</p>
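<p>As a concrete example of &quot;pre-trained and ready to run&quot;, here&apos;s a minimal sketch using the open-source <code>whisper</code> package; the model size and audio file name are placeholders.</p><pre><code>import whisper  # pip install openai-whisper

# Downloads pre-trained weights on first use; no training loop required.
model = whisper.load_model("base")

# Transcribe an audio file with the model exactly as shipped.
result = model.transcribe("meeting.mp3")
print(result["text"])</code></pre><p>No dataset, no GPUs spent on training: you pull the weights and have a working starting point to build on, which is the whole value of shipping pre-trained.</p>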
<h2 id="foundation-models-are-general-in-nature">Foundation models are general in nature</h2><p>Foundation models are, in most cases, there to be a useful starting point for a number of applications. In many cases that will require fine-tuning, or in some simple cases prompt wrangling.</p><p>ChatGPT and Midjourney are great because of their breadth. The applications and models built on top of them will likely be more specific and tuned to a single use case. This, to me, is the ideal scenario.</p><h2 id="foundation-models-are-open-source">Foundation models are open source</h2><p>Probably the most contentious of the three here. If you have an API as the only interface to your model, it&apos;s not a foundation model. It might still be a wonderful model (ChatGPT as an example) but it&apos;s not a foundation.</p><p>Open source here means open code, available weights and ideally access to the original training data. I&apos;d settle for the first two in a pinch but ideally it hits all three.</p><p>If we look at Stable Diffusion it hits at least the first two. Knowing if you can access all the training data is harder to verify but a good portion of that is open as well.</p><p>Once Stable Diffusion was released there was a flurry of activity on top of it. People built GUIs, tweaked the setup and fine-tuned it to develop many new models. Similar activity happened when OpenAI released Whisper.</p><p>That kind of rapid creation and collaboration is what makes a model foundational.</p><p>ChatGPT and GPT-4 on the other hand are API access only (at least at the time of writing). There will likely be many great products built on top of those APIs but they&apos;re not foundation models while they are locked away.</p><p>Similarly, Stable Diffusion is available as an API. That doesn&apos;t stop it being considered a foundation model. I am all for wrapping these models in managed services but the foundation of AI should be open.</p>]]></content:encoded></item><item><title><![CDATA[The Frontiers of Knowledge - Book Review]]></title><description><![CDATA[<p>Author: A. C. Grayling<br>Format: Audible<br>Category: Science - General</p><hr><p>I am currently looking to move house and in preparation I&apos;ve been trying to read books that have been on my shelf too long. These are books I have been meaning to get to for years. Often gifts</p>]]></description><link>https://www.elliotcsmith.com/the-frontiers-of-knowledge-book-review/</link><guid isPermaLink="false">634cc94523e8f5003d2f37c0</guid><category><![CDATA[books]]></category><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Mon, 24 Oct 2022 03:24:03 GMT</pubDate><content:encoded><![CDATA[<p>Author: A. C. Grayling<br>Format: Audible<br>Category: Science - General</p><hr><p>I am currently looking to move house and in preparation I&apos;ve been trying to read books that have been on my shelf too long. These are books I have been meaning to get to for years. Often gifts or books I thought were interesting last time I browsed a book store.</p><p>The Frontiers of Knowledge by A. C. Grayling was one of those books. I took advantage of a timely Audible credit and picked up the audiobook version, leaving the physical copy as something to pass on to someone else.</p><p>The Frontiers of Knowledge explores how humans have compounded our understanding of the world. From our beginnings as early sapiens through to modern discoveries in quantum mechanics. Split into three sections, the book explores the history of progress in science, psychology and historical inquiry. 
In doing so, Grayling presents some commonalities and consistent stumbling blocks on our quest towards further enlightenment.</p><p>Each section aims to begin as early in the development of that field as is useful. For science, this begins at the development of early tools. For historical inquiry, the earliest records of humankind. Through each topic we take an entertaining and broad tour of discoveries and missteps on the path to our current understanding.</p><p>This high-level treatment keeps the book moving at a comfortable pace. An expert in any of the topics covered will find their field described only at a high level. While this may be frustrating for the expert, the intended general audience for the book will likely appreciate the accessibility of the topics covered.</p><p>I listened to this book primarily whilst out walking the dog. It made for an entertaining walk and I picked up a few interesting historical tidbits along the way (did you know they used to flood the Colosseum for nautical combat displays?).</p><p>When thinking about where the physical copy of this book will end up (assuming it doesn&apos;t end up in a general donation box) I&apos;ll be looking to pass it on to someone with a broad interest in science and humanities. Overall this was a good read and I&apos;d recommend it to anyone looking for a quick and broad description of human progress and how far we have come.</p>]]></content:encoded></item><item><title><![CDATA[To Sleep in a Sea of Stars - Book Review]]></title><description><![CDATA[<p>Format: Audible 32 Hours<br>Author: Christopher Paolini<br>Genre: Science Fiction</p><p>Christopher Paolini, famous for the Eragon series, has taken a step into science fiction with the novel To Sleep in a Sea of Stars. The novel follows humanity expanding beyond the bounds of Earth and coming face</p>]]></description><link>https://www.elliotcsmith.com/to-sleep-in-a-sea-of-stars-book-review/</link><guid isPermaLink="false">634ca73623e8f5003d2f37a7</guid><category><![CDATA[books]]></category><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Mon, 17 Oct 2022 00:54:40 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1462331940025-496dfbfc7564?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDMxfHxzcGFjZXxlbnwwfHx8fDE2NjU5NjgwMzU&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1462331940025-496dfbfc7564?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDMxfHxzcGFjZXxlbnwwfHx8fDE2NjU5NjgwMzU&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="To Sleep in a Sea of Stars - Book Review"><p>Format: Audible 32 Hours<br>Author: Christopher Paolini<br>Genre: Science Fiction</p><p>Christopher Paolini, famous for the Eragon series, has taken a step into science fiction with the novel To Sleep in a Sea of Stars. The novel follows humanity expanding beyond the bounds of Earth and coming face to face with what lies beyond. Overall the novel, which I listened to over thirty-two hours on Audible, provided an entertaining story but moved too quickly to explore some of its more interesting ideas.</p><p>The story follows Kira Nav&#xE1;rez, an off-world xenobiologist on an otherwise routine mission to document signs of life on a far-off planet. In a somewhat cliche &quot;don&apos;t go in the haunted house&quot; moment, Nav&#xE1;rez falls into a long-abandoned alien warehouse. 
She merges with an unknown alien technology in the form of a sentient suit. This had parallels to the black ka&apos;kari from Brent Weeks&apos;s Night Angel trilogy, and a continuing theme is Nav&#xE1;rez learning to use, and trust, her new companion.</p><p>Awakening this technology alerts a hostile alien race, one of the story&apos;s antagonists throughout. Nav&#xE1;rez is captured, escapes and is ultimately picked up by a roving ship harboring refugees. The crew of this ship forms the remainder of the main cast with a varied collection of backstories and roles.</p><p>Nav&#xE1;rez and the ship&apos;s crew explore the origin of the alien technology and face off against the alien threat. Here readers begin to survey the wider universe that Paolini is building in the novel. While much of the world building was engaging, the novel felt like a drive-by of potential future stories. This was best characterized in the ending, which was largely a setup for future storytelling.</p><p>The audiobook format worked well for this novel. Voice acting was consistent and engaging. Given Jennifer Hale&apos;s career voicing roles in games like Mass Effect this is no surprise.</p><p>There was relationship development between the cast but much of that felt skin-deep. While I finished listening with some curiosity for the future, it was more aimed at the overall story of humanity. I was not overly attached to any individuals or their relationships with one another.</p><p>As I listened to the final third of the story I was making my way through Stray, a semi-dystopian video game featuring a small cat. The storytelling partnered with visuals of a run-down future tied together well. The semi-hopeless feel of Stray&apos;s robot-inhabited city characterizes the tone of the story. Humanity has expanded, grown its reach, but in a way that feels more robotic than organic. More driven by routine than a quest for the unknown.</p><p>I will be keeping an eye out for future novels in this new world but don&apos;t yet feel the same connection as I do to other multi-book projects like the Cosmere. If you&apos;re in the mood for a light and interesting space opera, I would recommend this book. If you&apos;re looking for deep and technical science fiction, perhaps look elsewhere.</p>]]></content:encoded></item><item><title><![CDATA[Findings #10]]></title><description><![CDATA[<p>This week has been all about AI sentience. This deserves some deeper posts but I will touch on the high-level ideas here.</p><p>It all started when a Google employee made the call that LaMDA was sentient. LaMDA is an AI that creates chatbots. After several conversations, the employee felt the</p>]]></description><link>https://www.elliotcsmith.com/findings-10/</link><guid isPermaLink="false">62a700d5a4c44b004d68b71a</guid><dc:creator><![CDATA[Elliot Smith]]></dc:creator><pubDate>Mon, 13 Jun 2022 09:43:07 GMT</pubDate><content:encoded><![CDATA[<p>This week has been all about AI sentience. This deserves some deeper posts but I will touch on the high-level ideas here.</p><p>It all started when a Google employee made the call that LaMDA was sentient. LaMDA is an AI that creates chatbots. After several conversations, the employee felt the model had the same intelligence as a seven- or eight-year-old child.</p><p>Look, honestly, it&apos;s a dubious claim. But there are some fascinating aspects to this. 
Before I get into them, here&apos;s the (curated) transcript between the AI and two Google employees.</p><p><a href="https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917/?ref=elliotcsmith.com">https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917</a></p><p>First, the chat log is curated, edited and very selective. A red flag on its own. Second, we need to remember this is an AI trained to answer questions. Trained on millions of questions from across the internet, this AI knows how to replicate answers.</p><p>The bigger question here is: what does it even mean for an AI to be sentient, conscious, or anything other than a pattern matcher and regurgitator? If we really want to get answers on this, we need better definitions. Most people I know who know more than a tutorial&apos;s worth of AI think that, while clever, this AI is far from conscious.</p><p>On the back of this claim, there has been a series of rebuttals. They range from technical deep dives to broad-brush claims that this is nothing more than human folly.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://garymarcus.substack.com/p/nonsense-on-stilts?s=r&amp;ref=elliotcsmith.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Nonsense on Stilts</div><div class="kg-bookmark-description">No, LaMDA is not sentient. Not even slightly.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://substackcdn.com/icons/substack/apple-touch-icon-1024x1024.png" alt><span class="kg-bookmark-author">The Road to AI We Can Trust</span><span class="kg-bookmark-publisher">Gary Marcus</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://substackcdn.com/image/fetch/w_1200,h_600,c_limit,f_jpg,q_auto:good,fl_progressive:steep/https%3A%2F%2Fpbs.substack.com%2Fmedia%2FFVEHQukUsAAOe5i.jpg" alt></div></a></figure><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">this LaMDA &#x201C;interview&#x201D; transcript is a great case study of the cooperative nature of AI theater. the human participants are constantly steering back toward the point they&#x2019;re trying to prove &amp; glossing over generated nonsense, plus editing after the fact <a href="https://t.co/2TtGmBuN6O?ref=elliotcsmith.com">https://t.co/2TtGmBuN6O</a></p>&#x2014; Max Kreminski (@maxkreminski) <a href="https://twitter.com/maxkreminski/status/1535816616700628992?ref_src=twsrc%5Etfw&amp;ref=elliotcsmith.com">June 12, 2022</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>Ironically, on the same day, I came across a post on just how limited another AI from Google is:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://shkspr.mobi/blog/2022/06/googles-ai-doesnt-understand-restaurant-menus/?ref=elliotcsmith.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Google&#x2019;s AI Doesn&#x2019;t Understand Restaurant Menus</div><div class="kg-bookmark-description">In the glorious future, every website will be chock-full of semantic metadata. Restaurants won&#x2019;t have a 50MB PDF explaining the chef&#x2019;s vision for organic cuisine &#x2013; instead, they&amp;#&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://shkspr.mobi/blog/wp-content/uploads/2020/02/cropped-avatar-270x270.jpg" alt><span class="kg-bookmark-author">Terence Eden&#x2019;s Blog</span><span class="kg-bookmark-publisher">edent</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://shkspr.mobi/blog/wp-content/uploads/2022/05/tikka.jpeg" alt></div></a></figure><p>One thing that fascinated me throughout all this was how quickly people switched to thinking LaMDA was in fact sentient. Comments on the transcript are full of people worried about the well-being of the AI and the risks of turning it off.</p><p>What this signals is the need for rules and tests on these topics. Convincing humans of sentience and true sentience are shaping up to be two very different things. Without a set of criteria (knowing full well they may change), we can&apos;t do anything other than argue over semantics. It seems we might be closer than I predicted to mourning for AI.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.elliotcsmith.com/will-we-mourn-for-siri/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Will we mourn for Siri?</div><div class="kg-bookmark-description">I remember the first time I said &#x201C;thank you&#x201D; to Siri. I wasn&#x2019;t paying attention and had asked for a timer to be set. Nothing particularly difficult, but without much thought, out slipped &#x201C;thank you&#x201D;. To be clear, I don&#x2019;t think Siri is doing what I ask out of politeness</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.elliotcsmith.com/content/images/size/w256h256/2021/06/apple-touch-icon.png" alt><span class="kg-bookmark-author">Elliot C Smith</span><span class="kg-bookmark-publisher">Elliot Smith</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://images.unsplash.com/photo-1603184017968-953f59cd2e37?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDJ8fFNpcml8ZW58MHx8fHwxNjQ4MzY5NjYz&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt></div></a></figure><p>Much like DALL-E 2 and its secret language, what&apos;s going on here is likely just good pattern matching. These AI are trained to mimic real-world behaviour, so we should expect them to eventually get good enough to fool us.</p><p>There is, however, an alternative, as hard to believe as it may be. It could be that these AI are in fact sentient. Not as young children, but as squirrels.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Stunning transcript proving that GPT-3 may be secretly a squirrel.<br>GPT-3 wrote the text in green, completly unedited!
<a href="https://t.co/DwUjiXOZuY?ref=elliotcsmith.com">pic.twitter.com/DwUjiXOZuY</a></p>&#x2014; Janelle Shane (@JanelleCShane) <a href="https://twitter.com/JanelleCShane/status/1535835610396692480?ref_src=twsrc%5Etfw&amp;ref=elliotcsmith.com">June 12, 2022</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><h2 id="from-me">From Me</h2><p>Tom and I have been putting out our podcast. Our most recent episode was our first foray into a live show. Check it out here:</p><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/sCuOjWebzGc?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure>]]></content:encoded></item></channel></rss>