<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title><![CDATA[SageSure Tech Blog]]></title><description><![CDATA[A peek into the problems we face and the trade-offs we make towards solving them.]]></description><link>https://tech.sagesure.com/blog/feed.xml</link><language>en-us</language><item><title><![CDATA[Breaking free of the legacy straitjacket - modernizing insurance systems]]></title><link>https://tech.sagesure.com/blog/2025-12-08-modernizing-insurance-systems/</link><description><![CDATA[<p>SageSure’s growth story has been phenomenal, and it has been a challenge for our applications to keep up with business expansion. Early on, SageSure invested in building applications based on relational database (RDBMS) technology. Core policy functions like quoting, binding, policy administration, endorsements, and billing relied on a tapestry of stored procedures, triggers, and application-level code. A few years back, our engineering team identified Temporal Workflows as a powerful solution for modernizing our intricate, stateful business processes.</p>
<h3 id="challenges-of-rdbms-driven-workflows">Challenges of RDBMS-driven workflows</h3>
<p>The growth in our business has introduced specific complexities that exacerbate the pain points of traditional RDBMS-managed workflows:</p>
<ul>
<li>
<p><strong>Rapid Product Development</strong>: In dynamic insurance markets, rules and regulations change frequently, and new product offerings are crucial for our competitive advantage. The tight coupling of business logic to stored procedures significantly slows down the speed at which we can introduce and iterate on new products or pricing models.</p>
</li>
<li>
<p><strong>Catastrophic Event Scalability</strong>: Following a hurricane or other major weather event, claims volume can spike exponentially. Our DB-driven, monolithic systems become bottlenecks, leading to delays in processing critical claims and impacting our policyholder satisfaction during their greatest time of need.</p>
</li>
<li>
<p><strong>Audit and Compliance</strong>: Tracing the exact sequence of events for regulatory compliance or internal audits in a DB-driven workflow often means piecing together scattered data points.</p>
</li>
<li>
<p><strong>Retry Logic for External Integrations</strong>: We rely heavily on external services for payment processing, geolocation, and more. Implementing robust retry mechanisms with back-off for these critical external calls within a DB-driven system is incredibly challenging and a common source of subtle, hard-to-diagnose failures.</p>
</li>
<li>
<p><strong>Limited Real-time Operational Visibility</strong>: Gaining real-time insight into the progress of a complex commission payment run or a high-volume policy registration is challenging. Identifying bottlenecks or policy application fallout often relies on batch reporting or deep database queries, hindering proactive intervention.</p>
</li>
</ul>
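<p>As an illustration of the boilerplate involved, here is a minimal sketch of hand-rolled retry with exponential back-off, in plain TypeScript with hypothetical names; this is the kind of code that ends up scattered through a DB-driven system:</p>

```typescript
// Hypothetical sketch (not production code): hand-rolled retry with
// exponential back-off for an external call, e.g. a payment processor.
async function withRetry<T>(
  call: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await call();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Back off: 100ms, 200ms, 400ms, ... before the next attempt.
        const delayMs = baseDelayMs * 2 ** attempt;
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}
```

<p>With Temporal, this entire helper disappears: retry policies and back-off are declared as Activity options instead of being hand-written per integration.</p>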
<h3 id="enter-temporal-durable-execution">Enter Temporal: durable execution</h3>
<p>Temporal is an open-source, distributed system that enables us to write durable, fault-tolerant, and scalable workflows as code. Instead of relying on a database for state management and orchestration, Temporal provides a dedicated platform that ensures our workflow executions are:</p>
<ul>
<li>
<p><strong>Durable</strong>: Workflow state is automatically persisted, so even if our services crash, the workflow execution resumes exactly where it left off. No data is lost, and no manual recovery of in-flight policies is needed.</p>
</li>
<li>
<p><strong>Fault-Tolerant</strong>: Temporal handles retries, timeouts, and error handling automatically for external calls (e.g., to third-party data providers, payment processors, or external rating engines). We define the business logic, and Temporal ensures it executes reliably, even in the face of transient network issues or external system outages.</p>
</li>
<li>
<p><strong>Scalable</strong>: Temporal is designed for high-throughput, long-running workflows, allowing us to scale our underwriting, policy processing, and high-volume claims handling independently of our application’s compute resources, which is crucial during CAT events.</p>
</li>
<li>
<p><strong>Observable</strong>: Every workflow execution has a complete, queryable event history, making it incredibly easy to debug, audit for compliance, and understand the real-time status of any policy application, claim, or billing event from initiation to completion. No more playing detective with fragmented logs; the full story is always there.</p>
</li>
</ul>
<h3 id="legacy-services-migration">Legacy services migration</h3>
<p>Migrating the intricate, stateful processes within our legacy RDBMS-based applications to Temporal typically involves identifying and extracting the core business processes currently orchestrated within the database and application code. Here’s our conceptual approach:</p>
<ul>
<li>
<p><strong>Identify Core Workflows</strong>: We analyzed our existing systems to pinpoint sequences of operations that represent a complete business process (e.g., “New Policy Application, Underwriting &#x26; Issuance,” “Claims management,” “Premium Billing &#x26; Collection with Payment Plan Management,” “Policy Endorsement &#x26; Mid-term Adjustments”). These are our prime candidates for Temporal Workflows.</p>
</li>
<li>
<p><strong>Decouple and Extract Logic</strong>:</p>
<ul>
<li>
<p><strong>Stored Procedures/Triggers</strong>: We converted the complex business rules and data manipulation logic embedded in the stored procedures and triggers into independent Temporal Activities. Activities are typically short-lived, idempotent operations that interact with external systems, like our existing database, other microservices, or third-party APIs.</p>
</li>
<li>
<p><strong>Application-Level State &#x26; Orchestration</strong>: We replaced the application code that managed policy, payment, and endorsement state in database tables (often via complex state machines and polling loops) with Temporal’s durable workflow execution.</p>
</li>
</ul>
</li>
<li>
<p><strong>Define Temporal Workflows</strong>: We wrote our core insurance processes as Temporal Workflows using Temporal’s Java SDK. Our Workflow code orchestrates the execution of Activities, representing the steps of our insurance process.</p>
</li>
<li>
<p><strong>Implement Temporal Activities</strong>: We created the Activity implementations that encapsulate the specific actions. These Activities contain the actual work: calls to our database (e.g., updating policy statuses), interactions with legacy systems via APIs (e.g., existing policy admin systems or billing platforms), or connections to new microservices (e.g., a high-definition property imagery service).</p>
</li>
</ul>
<h4 id="example-policy-payment-processing-workflow-using-java">Example: policy payment processing workflow using Java</h4>
<figure>
  <img src="/assets/blog/2025-12-08-code-sample.jpg" width="721" height="564" alt="Sample code.">
</figure>
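<p>The screenshot above shows the Java implementation. For readers who cannot view it, here is a rough, framework-free TypeScript sketch of the same shape; all names are hypothetical, and plain async functions stand in for Temporal Activities:</p>

```typescript
// Hypothetical, framework-free sketch of the payment workflow's shape.
// In Temporal, each PaymentActivities method would be an Activity that the
// platform persists and retries; here they are plain async functions.
interface PaymentActivities {
  chargePaymentMethod(policyId: string, amountCents: number): Promise<string>;
  updatePolicyStatus(policyId: string, status: string): Promise<void>;
  sendReceiptEmail(policyId: string, confirmation: string): Promise<void>;
}

// The workflow function only orchestrates: it sequences Activities and
// carries the business logic, not the I/O.
async function policyPaymentWorkflow(
  activities: PaymentActivities,
  policyId: string,
  amountCents: number,
): Promise<string> {
  const confirmation = await activities.chargePaymentMethod(policyId, amountCents);
  await activities.updatePolicyStatus(policyId, "PAID");
  await activities.sendReceiptEmail(policyId, confirmation);
  return confirmation;
}
```

<p>Keeping the orchestration free of I/O is what lets the platform replay and resume the workflow deterministically after a crash.</p>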
<h3 id="temporal-batch-iterator-pattern-for-large-scale-data-processing">Temporal Batch Iterator pattern for large-scale data processing</h3>
<p>One of the most challenging aspects of migrating our legacy insurance systems is dealing with vast amounts of existing data that needs to be processed, for example, monthly statement generation. Traditionally, this involved complex, fragile batch jobs, often orchestrated via database agents or custom scripts, which are prone to failure and difficult to monitor.</p>
<p>Temporal’s Batch Iterator Pattern provides a powerful, durable, and observable way to handle these large-scale data processing tasks. Instead of trying to process a large dataset within a single, long-running workflow, the pattern works as follows:</p>
<ul>
<li>
<p><strong>Orchestrator Workflow</strong>: A main “Orchestrator” Workflow is responsible for:</p>
<ul>
<li>Querying our database via an Activity to identify a batch of records to process.</li>
<li>Spawning a separate Child Workflow for each individual record or small group of records within that batch (e.g., one child workflow per policy payment record).</li>
<li>Maintaining its own state to track which batches have been processed and to aggregate results from the child workflows.</li>
</ul>
</li>
<li>
<p><strong>Child Workflows</strong>: Each Child Workflow handles the processing logic for its specific record. These child workflows benefit from all of Temporal’s durability, fault tolerance, and retry capabilities.</p>
</li>
<li>
<p><strong>Durable Iteration</strong>: If the Orchestrator Workflow or the Activity querying the database fails, Temporal ensures it resumes from the last known state. It won’t re-process records that have already had child workflows spawned for them, and it can pick up the next batch seamlessly. This significantly increases the reliability of large-scale batch operations that were once a source of significant operational overhead.</p>
</li>
</ul>
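<p>A minimal sketch of the Orchestrator loop, in plain TypeScript with hypothetical names; <code data-astro-raw>loadBatch</code> stands in for the database-querying Activity, and <code data-astro-raw>startChild</code> for spawning a Child Workflow:</p>

```typescript
// Hypothetical sketch of the Batch Iterator orchestrator loop.
// loadBatch stands in for the database-querying Activity; startChild stands
// in for spawning a durable Child Workflow per record.
interface PolicyRecord {
  id: number;
}

async function orchestrator(
  loadBatch: (afterId: number, size: number) => Promise<PolicyRecord[]>,
  startChild: (record: PolicyRecord) => Promise<void>,
  batchSize = 100,
): Promise<number> {
  let lastProcessedId = 0; // in Temporal, this state survives restarts
  let total = 0;
  while (true) {
    // Fetch the next page of records after the last processed id.
    const batch = await loadBatch(lastProcessedId, batchSize);
    if (batch.length === 0) break; // no more records to process
    for (const record of batch) {
      await startChild(record); // one child workflow per record
      lastProcessedId = record.id;
      total++;
    }
  }
  return total;
}
```

<p>In the real pattern, a crash of the Orchestrator resumes from the persisted <code data-astro-raw>lastProcessedId</code> rather than from the beginning, which is what prevents double-spawning child workflows.</p>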
<h3 id="the-batch-iterator-pattern-is-ideal-for-tasks-like">The Batch Iterator pattern is ideal for tasks like:</h3>
<ul>
<li>
<p><strong>Mass Policy Renewals</strong>: Spawning a workflow for each policy due for renewal, managing complex premium calculations and endorsements.</p>
</li>
<li>
<p><strong>Statement/Billing Cycle Generation</strong>: Creating a workflow per customer to generate their annual statement or manage complex installment billing.</p>
</li>
<li>
<p><strong>Data Migration &#x26; Transformation</strong>: Processing millions of legacy records to migrate them to a new system or format, ensuring each record is processed durably.</p>
</li>
<li>
<p><strong>Regulatory &#x26; Compliance Reporting</strong>: Generating individual reports or performing checks on a large number of entities.</p>
</li>
</ul>
<h4 id="sequence-diagrams-illustrating-the-batch-iterator-pattern">Sequence diagrams illustrating the Batch Iterator Pattern:</h4>
<figure>
  <img src="/assets/blog/2025-12-08-parent_workflow.svg" width="1060" height="405" alt="Sequence diagram of Orchestrator Workflow.">
</figure>
<h4 id="explanation-of-sequence-diagram-1">Explanation of sequence diagram 1</h4>
<ol>
<li>
<p>A user or an application kicks off our main “Orchestrator Workflow”.</p>
</li>
<li>
<p>The Orchestrator Workflow runs in a loop. In each iteration, it calls an Activity, executed by an Activity Worker.</p>
</li>
<li>
<p>This <code data-astro-raw>loadData</code> Activity interacts with the database to fetch a defined “batch” of records, often using pagination.</p>
</li>
<li>
<p>Once the Orchestrator receives a batch, it starts a Child Workflow for each record in that batch.</p>
</li>
<li>
<p>Each Child Workflow is a separate, independent, and durable execution that handles the specific processing for that single record.</p>
</li>
<li>
<p>The Orchestrator maintains its internal state (like <code data-astro-raw>lastProcessedId</code>) to ensure that if it fails and restarts, it knows where to resume fetching the next batch without duplication.</p>
</li>
<li>
<p>The loop continues until no more records are found in the legacy database.</p>
</li>
</ol>
<figure>
  <img src="/assets/blog/2025-12-08-child_workflow.svg" width="1060" height="405" alt="Sequence diagram of Child Workflow.">
</figure>
<h4 id="explanation-of-sequence-diagram-2">Explanation of sequence diagram 2</h4>
<ol>
<li>
<p>This diagram shows a detailed view of what happens within a single Child Workflow (e.g., Payment Processing Workflow).</p>
</li>
<li>
<p>It orchestrates a series of Activities, like processing a payment or sending an email communication.</p>
</li>
<li>
<p>Each Activity interacts with an external service – here, our PaymentProcessingService and EmailService.</p>
</li>
<li>
<p>Temporal’s built-in retry mechanisms handle transient failures for any of these Activities, ensuring the overall policy payment and email communication processes are robust.</p>
</li>
<li>
<p>Upon completion, the Child Workflow’s status is recorded by the Temporal Server and can be observed by the Orchestrator.</p>
</li>
</ol>
<h3 id="key-benefits-of-temporal-in-sagesures-modernization-journey">Key benefits of Temporal in SageSure’s modernization journey</h3>
<ul>
<li>
<p><strong>Reliability &#x26; Durability</strong>: Temporal guarantees workflow execution to completion, even if our systems crash. This is crucial for financial integrity, regulatory compliance, and maintaining policyholder trust.</p>
</li>
<li>
<p><strong>Simplified Complexity for Business Logic</strong>: Complex retry logic for external integrations, long-running processes, and compensation patterns are built into Temporal’s programming model. This significantly reduces the amount of fragile, boilerplate code we need to write and maintain.</p>
</li>
<li>
<p><strong>Improved Scalability</strong>: Temporal decouples our core business logic from database contention and scales horizontally, allowing us to handle a massive number of concurrent policy applications and complex billing cycles, ensuring responsiveness and operational continuity even during peak demand.</p>
</li>
<li>
<p><strong>Accelerated Product Innovation</strong>: We can write complex insurance processes as clear, imperative code. The clean separation of Workflow and Activity logic makes it easier for our product managers and developers to quickly design, test, and deploy new insurance products, adjust commission rules, or modify existing claims processes to respond to market changes.</p>
</li>
<li>
<p><strong>Future-Proofing Tech</strong>: We can move away from a monolithic, tightly coupled design towards a more distributed, microservices-friendly architecture, paving the way for advanced geospatial analytics, AI-driven risk assessment, real-time data integration, and other future modernization efforts crucial for a leader in the Insurance Tech space.</p>
</li>
</ul>
<h3 id="conclusion">Conclusion</h3>
<p>Migrating our core workflows from legacy RDBMS-based applications to Temporal was not just about fixing immediate problems; it was about transforming the fundamental way SageSure operates. It liberated our critical business logic from the confines of outdated architectures, allowing us to innovate faster, improve customer experience, reduce operational risk, and build systems that are ready for the demands and evolution of the modern Insurance Tech landscape.</p>]]></description><pubDate>Mon, 08 Dec 2025 00:00:00 GMT</pubDate></item><item><title><![CDATA[Constant management: practices and strategies]]></title><link>https://tech.sagesure.com/blog/2023-11-16-constant-management-practices-and-strategies/</link><description><![CDATA[<p>In software development, constants are used to store fixed data that remains the same throughout the program’s execution. The definition of a constant is simple. However, managing constants in complicated projects can sometimes be a headache. I worked on the Agent Portal (AP) project, and we had technical debt around constant management. This blog post discusses the strategies we use to manage constants in AP, as well as some other strategies people use to manage constants in software applications.</p>
<h2 id="constant-management-in-ap">Constant management in AP</h2>
<p>A good constant should have at least the following characteristics:</p>
<blockquote>
<ol>
<li>It stores fixed values that have been used multiple times in different places in the codebase.</li>
<li>It has a meaningful name. Usually the name describes the purpose of the constant or the data it represents.</li>
<li>It has a scope.</li>
<li>It has a specific data type.</li>
</ol>
</blockquote>
<p>This is also a guide in AP when we create a constant. In AP, we manage constants at two scopes: global and local. Global constants are usually shared across different components, and local constants are usually for first-level or second-level components. All constants start as local constants, and the naming convention for the constant file is <code data-astro-raw>&#x3C;component-name>.constants.ts</code>.</p>
<figure>
  <img src="/assets/blog/2023-09-01-constant-file-naming.png" width="560" alt="How we name our constants file">
  <figcaption>How we name our constants file</figcaption>
</figure>
<p>At the global level, there is a file called <code data-astro-raw>constants.ts</code> that imports and exports all local constants files, which makes it easier to share local constants across components. This global constants file also contains project-level constants whose scope is clearly global.</p>
<figure>
  <img src="/assets/blog/2023-09-01-constant-file-structure.png" width="300" alt="constants file structure">
  <figcaption>global constants file location relative to components</figcaption>
</figure>
<figure>
  <img src="/assets/blog/2023-09-01-import-export-shared-constant-file.png" width="960" alt="Shared constants files">
  <figcaption>Shared constants files in global constants file</figcaption>
</figure>
<figure>
  <img src="/assets/blog/2023-09-01-import-export-shared-constant.png" width="560" alt="Shared constants">
  <figcaption>Shared constants and icon file in global constants file</figcaption>
</figure>
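<p>In code, the two-level layout looks roughly like this (constant names and paths are illustrative, not the actual AP code):</p>

```typescript
// search-bar.constants.ts: a local constants file, scoped to one component.
export const SEARCH_DEBOUNCE_MS = 300;
export const MAX_SUGGESTIONS = 10;

// constants.ts: the global file re-exports local constants files so they can
// be shared across components, and also holds project-level constants.
// export * from "./components/search-bar/search-bar.constants";
export const APP_NAME = "Agent Portal";
```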
<p>The pro of this strategy is that it’s easy to locate constants, since all constants are rooted in one place. This approach also simplifies updates and modifications.</p>
<p>The con is that it requires a clear and thoughtful categorization scheme. Be careful when introducing new constants, and give each one the right scope.</p>
<h2 id="other-strategies-to-manage-constants">Other strategies to manage constants</h2>
<h4 id="1-centralized-constants-file">1. Centralized Constants File:</h4>
<blockquote>
<p>Pros:</p>
<ul>
<li>Easy to locate and manage constants in a single file.</li>
<li>Enhances consistency as all constants are in one place.</li>
<li>Simplifies updates and modifications.</li>
</ul>
</blockquote>
<blockquote>
<p>Cons:</p>
<ul>
<li>Can become overwhelming as the codebase grows.</li>
<li>May require careful organization and categorization to maintain readability.</li>
</ul>
</blockquote>
<h4 id="2-external-configuration">2. External Configuration:</h4>
<blockquote>
<p>Pros:</p>
<ul>
<li>Allows for configuration adjustments without modifying the code.</li>
<li>Useful for constants that change based on different environments (e.g., URLs, API keys).</li>
<li>Separates concerns by keeping configuration separate from application logic.</li>
</ul>
</blockquote>
<blockquote>
<p>Cons:</p>
<ul>
<li>Introduces external dependencies and potential configuration errors.</li>
<li>Might lead to confusion if not well-documented or version-controlled.</li>
</ul>
</blockquote>
<h4 id="3-using-frameworks-or-libraries">3. Using Frameworks or Libraries:</h4>
<blockquote>
<p>Pros:</p>
<ul>
<li>Leverage established practices and patterns for managing constants.</li>
<li>Frameworks may provide additional tools for validation, configuration, etc.</li>
</ul>
</blockquote>
<blockquote>
<p>Cons:</p>
<ul>
<li>May introduce a learning curve for using the specific framework.</li>
<li>Could lead to unnecessary dependencies if not carefully considered.</li>
</ul>
</blockquote>
<h4 id="4-application-specific-constants-classes">4. Application-Specific Constants Classes:</h4>
<blockquote>
<p>Pros:</p>
<ul>
<li>Encapsulates constants within classes, offering better organization.</li>
<li>Provides a clear naming convention by using class namespaces.</li>
</ul>
</blockquote>
<blockquote>
<p>Cons:</p>
<ul>
<li>Might introduce additional complexity for small projects with limited constants.</li>
<li>Requires adherence to the class-based approach throughout the codebase.</li>
</ul>
</blockquote>
<h4 id="5-enums-enumerations">5. Enums (Enumerations):</h4>
<blockquote>
<p>Pros:</p>
<ul>
<li>Provides a clear and expressive way to represent related constants.</li>
<li>Enhances type safety as enums restrict values to a predefined set.</li>
<li>Reduces the chance of using arbitrary integer or string values.</li>
</ul>
</blockquote>
<blockquote>
<p>Cons:</p>
<ul>
<li>Limited flexibility in cases where the values are not enumerable.</li>
<li>Adds some complexity when compared to simple numeric or string constants.</li>
</ul>
</blockquote>
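<p>For example, a TypeScript enum restricts a value to a predefined set (names are illustrative):</p>

```typescript
// Illustrative enum: a predefined set of related string constants.
enum PolicyStatus {
  Quoted = "QUOTED",
  Bound = "BOUND",
  Active = "ACTIVE",
  Cancelled = "CANCELLED",
}

// The compiler rejects arbitrary strings where a PolicyStatus is expected.
function isTerminal(status: PolicyStatus): boolean {
  return status === PolicyStatus.Cancelled;
}
```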
<h4 id="6-database-or-external-storage">6. Database or External Storage:</h4>
<blockquote>
<p>Pros:</p>
<ul>
<li>Offers the flexibility to change constants without redeploying the application.</li>
<li>Useful for frequently changing values or for non-technical users to manage.</li>
</ul>
</blockquote>
<blockquote>
<p>Cons:</p>
<ul>
<li>Introduces performance overhead when fetching values from an external source.</li>
<li>Can complicate deployment and version control, especially in distributed systems.</li>
</ul>
</blockquote>
<h2 id="conclusion">Conclusion</h2>
<p>In conclusion, there is no one-size-fits-all strategy for managing constants. The best approach depends on your project’s size, complexity, team dynamics, and future scalability requirements. A combination of these strategies might be necessary to strike the right balance between maintainability, readability, and flexibility. Whichever strategy you choose, ensure that your team is aligned and that the chosen approach is well-documented to facilitate seamless collaboration and future development.</p>]]></description><pubDate>Thu, 16 Nov 2023 00:00:00 GMT</pubDate></item><item><title><![CDATA[Learning TypeScript through refactoring]]></title><link>https://tech.sagesure.com/blog/2023-10-01-learning-typescript-through-refactoring-react-components-written-in-javascript/</link><description><![CDATA[<p>The Agent Portal (AP) is a modern web application built using JavaScript and React. It is the entry point to the policy origination system, offering essential access points for the “agent-user segment.” I worked on refactoring AP using TypeScript. This blog is about my experience of refactoring React components and learning about TypeScript. During the refactoring, I updated React components to use the newer functional component style and added TypeScript type annotations.</p>
<h2 id="my-refactoring-process">My Refactoring Process</h2>
<p>I needed to update code that was written using the old style of React class components. For example, the component “SearchableText” was written as a React class component.</p>
<figure>
  <img src="/assets/blog/2023-08-31-old-styled-react-component.png" width="600" alt="Screen shot of an old styled React component">
  <figcaption>Old styled React component</figcaption>
</figure>
<blockquote>
<ol>
<li><strong>Convert the code to use React functional components.</strong> Going through the process of updating the component helps me to understand the code.</li>
<li><strong>Console log the component props to confirm their types.</strong> This step can be a challenge sometimes. For example, if a component has a prop that is a child prop passed from a parent component, then I need to track down the parent component to figure out the types.</li>
<li><strong>Use the command <code data-astro-raw>git mv &#x3C;component-name>.js &#x3C;component-name>.tsx</code> to rename the file to use the <code data-astro-raw>.tsx</code> extension,</strong> which supports TypeScript. Using <code data-astro-raw>git mv</code> renames and stages the file in one step, helping Git track the change in the history.</li>
<li><strong>Add TypeScript type annotations and Interfaces.</strong> The challenge of this step is to understand the codebase and determine the appropriate types for variables and functions. Sometimes, bugs will be revealed in the code as I add type annotations. Fixing these bugs as they crop up can be time-intensive.</li>
<li><strong>Run unit tests and update unit tests accordingly.</strong> This step helps me guarantee that the refactored code is still valid and can run successfully.</li>
</ol>
</blockquote>
<p>As a result, I get a React functional component that uses TypeScript. Here are the type annotations that I added. They clearly indicate the types of each prop.</p>
<figure>
  <img src="/assets/blog/2023-08-31-typescript-interface.png" width="360" alt="TypeScript Interface Example">
  <figcaption>TypeScript interface example</figcaption>
</figure>
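<p>For readers who cannot view the screenshot, a props interface of this kind looks roughly like the following; the names are hypothetical, not the actual AP code:</p>

```typescript
// Hypothetical props interface of the kind added during the refactor; the
// actual AP interface is shown in the screenshot above.
interface SearchableTextProps {
  text: string;
  query: string;
  highlight?: boolean; // optional prop
  onMatch?: (index: number) => void; // optional callback prop
}

// A small helper typed against the interface, to show the annotations in use.
function countMatches(props: SearchableTextProps): number {
  if (props.query.length === 0) return 0;
  let count = 0;
  let idx = props.text.indexOf(props.query);
  while (idx !== -1) {
    props.onMatch?.(idx); // only called when the optional prop is provided
    count++;
    idx = props.text.indexOf(props.query, idx + 1);
  }
  return count;
}
```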
<h2 id="what-i-learned-about-typescript-while-refactoring">What I Learned About TypeScript While Refactoring</h2>
<ul>
<li>
Using TypeScript helps catch potential bugs. For example, the following function will not show errors in JavaScript.
<br>


<br>
<figure>
<img src="/assets/blog/2023-08-31-function-add-numbers.png" width="700" alt="JavaScript Function Add Numbers">
<figcaption>JavaScript function addNumbers</figcaption>
</figure>
<br>
But if we write with TypeScript support, the type error will be caught.<br>
<br>
<figure>
<img src="/assets/blog/2023-08-31-typescript-function-add-number.png" width="700" alt="TypeScript Function Add Numbers">
<figcaption>TypeScript function addNumbers</figcaption>
</figure>
<br>
</li>
<li>
Using TypeScript improves code readability. Type annotations make the code more self-documenting, and I can immediately understand what types of values the function can accept and return. For example, in the following code, it’s clear that 
<code data-astro-raw>setPage</code>
 is a dispatch action.
<br>


<br>
<figure>
<img src="/assets/blog/2023-08-31-typescript-type-counter.png" width="700" alt="TypeScript Counter">
<figcaption>TypeScript type annotations</figcaption>
</figure>
<br>
</li>
<li>
Using TypeScript improves handling of 
<code data-astro-raw>null</code>
 and 
<code data-astro-raw>undefined</code>
. For example, the following JavaScript code might return 
<code data-astro-raw>null</code>
 or 
<code data-astro-raw>undefined</code>
 in the case of invalid inputs.
<br>


<br>
<figure>
  <img src="/assets/blog/2023-08-31-function-with-null.png" width="700" alt="Function With Null">
  <figcaption>Code example contains null value</figcaption>
</figure>
<br>
Using TypeScript allows me to specify the type annotations for function parameters and return values. In this case, the function is explicitly defined to return either a <code data-astro-raw>number</code> or <code data-astro-raw>null</code>, providing better clarity about its behavior.<br>
<br>
<figure>
<img src="/assets/blog/2023-08-31-typescript-handling-null.png" width="700" alt="TypeScript Handling Null">
<figcaption>Explicitly defined to return either a <code data-astro-raw>number</code> or <code data-astro-raw>null</code></figcaption>
</figure>
<br>
</li>
<li>
<p>TypeScript’s type system catches potential errors early in development. If I accidentally use the <code data-astro-raw>result</code> variable without checking for null, TypeScript would raise a compilation error.</p>
</li>
<li>
<p>Interfaces can define the structure of an object, specifying the properties and their types. This aids in creating contracts between different parts of the codebase and promotes code reusability.</p>
</li>
<li>
<p>TypeScript also can infer types based on context, even if types are not explicitly specified. This reduces the need for explicit type annotations while still providing type checking.</p>
</li>
<li>
<p>TypeScript can work with existing JavaScript libraries by using type declarations or type definition files (.d.ts files) to provide type information for JavaScript code.</p>
</li>
<li>
<p>TypeScript supports union types (values that can be any one of multiple types) and intersection types (values that must satisfy multiple types simultaneously).</p>
</li>
</ul>
<figure>
  <img src="/assets/blog/2023-08-31-typescript-interface-containing-union-type.png" width="400" alt="TypeScript Union Type">
  <figcaption>Example of an interface containing a union type</figcaption>
</figure>
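<p>For readers who cannot view the figure, here is a small illustration of both type forms (all names hypothetical):</p>

```typescript
// Illustrative union and intersection types.
type SearchResult = string | null; // union: one of several types

interface Named {
  name: string;
}
interface Timestamped {
  updatedAt: number;
}
type AuditedEntity = Named & Timestamped; // intersection: both shapes at once

// Returns the first matching item, or null when nothing matches.
function firstMatch(items: string[], query: string): SearchResult {
  for (const item of items) {
    if (item.indexOf(query) !== -1) return item;
  }
  return null;
}

// An AuditedEntity must provide the fields of both Named and Timestamped.
const audited: AuditedEntity = { name: "policy-form", updatedAt: 1700000000 };
```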
<h2 id="learning-through-practical-application">Learning Through Practical Application</h2>
<p>It is hard to acquire new knowledge without applying what you’re learning. The refactoring process is a great opportunity to learn about TypeScript. Instead of just reading documentation and practicing using example code/code playgrounds, refactoring is a real-world application. Every challenge that I encountered during refactoring was helpful to make the learning process more natural.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Technology develops fast, and refactoring keeps a product alive and competitive in terms of performance. I am a fan of refactoring not just because I personally benefit a ton from it, but also because of the benefits of having high-quality code and better-performing projects. Having a habit of refactoring code is a double win.</p>
<p>In summary, there are many ways to acquire programming knowledge, and I find my best and favorite way is through refactoring. AP is a perfect project for me to keep improving my programming skills, allowing me to enhance my knowledge during refactoring. The knowledge that I gain through refactoring is long-term knowledge, because I was able to not only learn about new topics but to apply it to the AP project.</p>]]></description><pubDate>Sun, 01 Oct 2023 00:00:00 GMT</pubDate></item><item><title><![CDATA[Scrum at SageSure]]></title><link>https://tech.sagesure.com/blog/2023-09-08-tuning-agile-for-todays-needs/</link><description><![CDATA[<p>There are many ceremonies around Agile: sprint planning, daily scrum, sprint review, and sprint retrospective to name a few. Teams usually take Agile and shape it to what fits their culture and team. During the last few months I have observed the majority of our Software teams performing their daily scrum, and have some observations and tips on better practices I want to share with you all.</p>
<h2 id="some-observations">Some observations</h2>
<p>Agile methodology teaches us that the daily scrum is 15 minutes, where each member of the team shares what they did yesterday, what they will do today, and if they have blockers. No more and no less.</p>
<p>This is the methodology, but Agile needs to be flexible to live up to its name and purpose. Now that Software teams have become more hybrid and distributed, the daily scrum can be repetitive and boring, causing team members to lose investment in the meeting. The heart of scrum can remain: a team update and work progress touchpoint, but the format needs to be willing to change.</p>
<ul>
<li>
<p><strong>Yesterday’s update.</strong> If a team is working very closely together, they already know what the members did yesterday. They were most likely doing the work alongside each other. If your team is working in lockstep, I would recommend completely eliminating the portion of scrum where each member reviews what they did yesterday.</p>
</li>
<li>
<p><strong>15-minutes.</strong> Working from home means less watercooler time and less casual conversation when people arrive at the office or mingle around lunch. Invest in your team and don’t restrict the meeting to 15 minutes. The extra time can be used to further discuss blocking issues, or share upcoming weekend plans.</p>
</li>
<li>
<p><strong>Person-by-person.</strong> Variety can create a higher engagement level from the team members. Some teams prefer person-by-person (traditional) updates during the stand-up, allowing everyone to speak and share their progress. Other teams opt for a split-plan stand-up, discussing initiatives one by one. When speaking through initiatives, assigning one person to be a primary owner, or champion, is valuable. This ensures that no single person dominates the entire meeting and helps reinforce which team member is the primary contact for that initiative. Some teams mix and match these update styles. Some days, they go by person, and some days, they go by initiative.</p>
</li>
</ul>
<h2 id="better-practices">Better practices</h2>
<p>Whichever style you adopt, here are some practices I would encourage.</p>
<ul>
<li>
<p><strong>Sharing your camera</strong> during Scrum is a good thing because it promotes stronger team connections and enhances communication. Visual cues and body language play a significant role in understanding and interpreting messages, fostering a more empathetic and collaborative environment. By enabling team members to see each other, it encourages active participation, engagement, and accountability during the meeting, leading to more effective and productive discussions.</p>
</li>
<li>
<p><strong>Giving each person an opportunity</strong> to speak during Scrum is a good thing because it ensures that all team members have a chance to share their progress and challenges, fostering a sense of inclusivity and ownership within the team. It also prevents the dominance of certain individuals, promoting a more balanced and diverse input during the meeting.</p>
</li>
<li>
<p><strong>Engaging blockers</strong>—it’s essential to prioritize and discuss blockers at each scrum. Is something preventing a work item from progressing? Discuss the problem and appoint a person responsible for tracking it down!</p>
</li>
<li>
<p><strong>Varying the order</strong> of speakers during Scrum is a good thing because it prevents the meetings from becoming predictable and encourages all team members to stay engaged and attentive. It also ensures that different perspectives are heard, leading to more diverse insights and fostering a collaborative team environment.</p>
</li>
<li>
<p><strong>Setting aside time to talk about life</strong> during a remote team Scrum is beneficial as it fosters stronger connections, builds trust, and promotes a supportive and empathetic work environment. This can be once a week for 10 minutes. It can be a prepared topic or informal, like a “how was your weekend?”.</p>
</li>
<li>
<p><strong>Wins, wins, wins!</strong> Delivered a feature? Celebrate it! Call it out and have the team recognize the achievement. A celebration doesn’t have to be a pizza party. It can take a minute or less during the daily scrum. Pizza parties can still be done, but they don’t fit every delivery or accomplishment. Did the team perform well in a debugging session? Did an individual have a great insight? Is it someone’s work anniversary? Build and maintain a team that celebrates its deliveries and its members!</p>
</li>
</ul>
<h2 id="further-enhancements">Further enhancements</h2>
<ul>
<li>
<p>On the subject of <strong>documentation</strong>, teams have the option of documenting their updates or not, and it’s essential to find the right approach that works for your team. Some teams benefit from detailed documentation, while others prefer a more fluid and verbal approach.</p>
</li>
<li>
<p><strong>Color coordinating and categorizing</strong> the split view based on areas of work (backend, frontend, design, product, etc.) can be beneficial as this visual organization makes it easier for team members to understand the distribution of work and responsibilities. In addition, returning to this page during the week can help members understand their work and its status at a high level.</p>
</li>
<li>
<p><strong>Giving each week a theme</strong> name inside the split, such as Texas Hold ’em’s betting stages, can make the process more fun and engaging for team members. The betting stage convention can also give an idea of how mature the initiatives should be at that time during the split.</p>
</li>
<li>
<p>Finally, it’s a good idea to open the floor for <strong>demos</strong>, allowing teams to showcase their progress and achievements during the sprint. Demos not only provide an opportunity for the team to showcase their work but also encourage collaboration and feedback. Attending a demo of work you’re not directly involved with can also help deepen and broaden the knowledge base of the entire team.</p>
</li>
</ul>
<h2 id="making-scrum-work-for-you">Making scrum work for you</h2>
<p>In conclusion, the key to successful Scrum implementation is to <strong>mix things up</strong> and not shy away from trying new approaches. There are some best practices, like sharing your camera, but beyond that each team is unique. Experiment with different meeting styles, themes, and communication methods: be adventurous! After trying something new for a few meetings, gather feedback from the team to assess whether the change is beneficial and should be kept, or unwanted and should be abandoned. <strong>Continuously learning</strong> from these experiences will help your team refine its Scrum practices and create a more efficient and productive workflow.</p>
<h2 id="happy-scrumming">Happy Scrumming!</h2>]]></description><pubDate>Fri, 08 Sep 2023 00:00:00 GMT</pubDate></item><item><title><![CDATA[Let’s talk about burnout]]></title><link>https://tech.sagesure.com/blog/2023-06-23-lets-talk-about-burnout/</link><description><![CDATA[<p>In August 2017, I quit.</p>
<p>After four years at SageSure, I walked away from a job that I loved because I fell victim to burnout, leaving the very company where my career in software began.</p>
<p>SageSure gave me what I felt was my first big break into the software world. It was the kind of job I had been reaching for since I graduated college: remote work, a handsome salary, incredibly talented coworkers, and exciting work. It was a dream come true. The first few years were fast-paced and exhilarating, and I genuinely loved the work I was getting to do. A year after coming on board I assumed the role of Operations Lead, heading a small team of talented engineers who oversaw our cloud infrastructure and CI/CD systems and monitored our production stack. As a company, we were accelerating at an amazing pace and smashing records for new business nearly every month. As a result, the frequency of changes and deployments flowing into production was ever-increasing. On one hand, I was watching my career take flight while learning new tech and automating everything else. On the other hand, the number of backlog tickets, production incidents, outages, and postmortems I was involved in was growing just as fast. At times I felt the crunch and hunkered down to get through tight deadlines, incidents, and an endless number of meetings. But still, I loved my job.</p>
<p>At this point, the quiet erosion of my career euphoria, my dream job, was hardly noticeable. There were times when I directly felt the stress of work, but I assumed that occasionally feeling exhausted by work was simply how it went. I figured if things got really bad I could take a few days off to reset. Another year went by and the story read the same: more organizational success, more frequent delivery to production, and more fallout from outages, technical debt, and new features. Then I finally hit a wall. The thing is, I didn’t hate my job at this point; I still loved it. But I knew it was going to crush me if I didn’t leave it behind. I felt powerless and ill-equipped to handle the way I responded to stress. I was out of gas, with no more energy left to give to my job, so I resigned and slipped away with a bag of emotion dragging behind me.</p>
<p>How did this happen? Why didn’t I see this coming? These were the questions that swirled in my head when I finally had a chance to reflect on the most shocking turn of my young career.</p>
<h2 id="what-exactly-is-burnout">What exactly is burnout?</h2>
<p>Stress manifests differently in everyone, which can make identifying and treating job-related stress and burnout a real challenge. Herbert Freudenberger, a German-born American psychologist and psychotherapist, first coined the term “burn out” in 1974, describing it as “a consequence of excessive stress leading to chronic fatigue and lack of enthusiasm”. His work defined a 12-stage model for the development of burnout symptoms:</p>
<ul>
<li><strong>The Compulsion to Prove Oneself</strong> - Demonstrating worth obsessively, probably something that’s especially prevalent in tech/software where imposter syndrome can be a factor</li>
<li><strong>Working Harder</strong> - An inability to switch off, working late, working early, working on your days off</li>
<li><strong>Neglecting Needs</strong> - Neglecting your own needs. Poor sleeping habits, disruptive dietary changes, reduced social interaction</li>
<li><strong>Displacement of Conflicts</strong> - Displacing the acknowledgment that you’re pushing yourself too much, instead blaming your manager, the demands of your job, or coworkers for your stress</li>
<li><strong>No Time for Non-Work</strong> - A revision of values, where work becomes your only focus and your personal life is de-prioritized</li>
<li><strong>Denial</strong> - Impatience with others begins to mount and instead of taking responsibility for your feelings you start to form intolerance and cynical attitudes towards coworkers, clients, and your manager</li>
<li><strong>Withdrawal</strong> - Complete withdrawal from family and friends and your social life</li>
<li><strong>Behavioral Changes</strong> - Obvious behavior changes, things that your friends and family may take notice of. Those that are on the road to burnout may become aggressive or snap at friends and family for no reason</li>
<li><strong>Depersonalization</strong> - A reduced or absent sense of personal accomplishment, detaching yourself from your work, and feeling like you’re no longer valuable</li>
<li><strong>Inner Emptiness</strong> - Feeling empty and anxious</li>
<li><strong>Depression</strong> - Exhaustion and feeling like the future is dark and bleak</li>
<li><strong>Burnout Syndrome</strong> - Which he describes as a total collapse</li>
</ul>
<p>Many of these are clear sensations that may be easy to spot yet others you may never realize until you’re in the middle of it. In my case burnout first began building early on in my career at SageSure but not in ways that I ever expected or even noticed.</p>
<h2 id="what-i-learned">What I learned</h2>
<ul>
<li>
<p><strong>We often talk about burnout when it’s too late</strong>
It feels a bit edgy to open up about all of this, which I think points to one of the inherent issues with burnout in the modern workplace: it’s unnecessarily regarded as taboo. No one really wants to talk about stress and burnout. The thing is, no one is immune to the symptoms of burnout either, and everyone endures stress in the workplace whether they know it or not.</p>
</li>
<li>
<p><strong>Burnout is as slippery as it is silent</strong>
I never noticed that my burnout was as bad as it was until it was too late. It creeps up on you, ever so slightly, so you don’t come to notice that you’re suffering more and more each day. You generally won’t notice the signs of burnout in yourself because it’s a gradual thing, affecting you little by little over a period of time.</p>
</li>
<li>
<p><strong>I didn’t know how to step away</strong>
I pressured myself into working long hours, working late, and working through my lunch regularly. In addition, I didn’t take advantage of some of the resources SageSure provided me, like paid time off. I remember that on my last day of work I still had 84 hours of unused vacation time. Because I spent so much time focusing on my work, I didn’t spend enough time paying attention to myself, and as a consequence I didn’t recognize my burnout until it was way too late. Instead of stopping to defuse the ticking time bomb, I kept pushing forward and let it grow bigger and bigger.</p>
</li>
<li>
<p><strong>There’s a razor-thin edge between loving your job and feeling destroyed by it</strong>
I loved my work, and as such, was more prone to burnout. It’s one of those ironies of having a career. It’s like a complicated love affair: some days it’s exciting and passionate while other days it’s exhausting and emotionally draining. It’s easy to work longer and harder when you’re passionate and invested in your work but, as most things go, it requires careful balance with time away from the job. Without balance, you will most certainly be crushed.</p>
</li>
</ul>
<h2 id="sagesure-round-two">SageSure, round two</h2>
<p>After I burned out I spent the following year and a half freelancing and working on personal projects which gave me the opportunity to reflect on what happened and why it happened. I stayed in touch with SageSure and my friends there and was lucky enough to get the chance to join the organization for a second time with a new perspective on my job and priorities. Better understanding burnout and how job stress affects me has made a significant improvement on my quality of life and the quality of my work.</p>
<h2 id="conclusion">Conclusion</h2>
<p>If there’s any takeaway you get from this post it should be this: talk about burnout. It’s the most important first step anyone and any organization can do. I’m a testament to the reality of work stress and burnout and wanted to share what I know about it, what I went through, and to hopefully get us started on thinking about these things more often.</p>]]></description><pubDate>Fri, 23 Jun 2023 00:00:00 GMT</pubDate></item><item><title><![CDATA[Continuous delivery with the help of Postman]]></title><link>https://tech.sagesure.com/blog/2022-04-01-continuous-delivery-with-the-help-of-postman/</link><description><![CDATA[<h2 id="sagesure-insurance-enables-continuous-feature-delivery-with-postman">SageSure Insurance Enables Continuous Feature Delivery with Postman</h2>
<p>SageSure offers dependable homeowners insurance in catastrophe-prone areas where it’s needed most. We are an innovative insurance company that promises stability, peace of mind, and a commitment to being there for the long haul.</p>
<p>SageSure partners with many highly rated insurance carriers to serve the various needs of our markets. But what makes us stand out is that SageSure actually designs, develops, and files our proprietary homeowners and commercial insurance products ourselves.</p>
<p>Our approach combines the benefits that the carrier, underwriter, and distribution partners bring into one solid effort—and the result is unwavering, long-term support that agents, homeowners, and business owners can depend on.</p>
<h2 id="the-sagesure-software-team-and-postman">The SageSure software team and Postman</h2>
<p>The SageSure software team is focused on improving our development and delivery process. One high-priority process is continuous delivery of new features. This means no service downtime during deployment, and frequent small pushes to the production environment with a high level of confidence. This end-to-end deployment effort is realized with the help of many tools, including the Postman Collection Runner for test validation.</p>
<p>SageSure’s tech stack including the Postman API Platform has resulted in these key benefits:</p>
<ul>
<li>approval of merge requests to production in under 60 minutes;</li>
<li>increased frequency of smaller deployments;</li>
<li>a high degree of confidence with regression functionality; and</li>
<li>enhanced reliability of services at the micro level.</li>
</ul>
<h2 id="common-deployment-concerns">Common deployment concerns</h2>
<p>The questions our team assessed while looking to improve our process include: How do you get the code that is adding a feature or fixing a bug into a live environment? What team members are involved in the push? At what time does the push take place? Do you have a high degree of confidence that this new feature/fix will not break any current functionality? Does deploying cause any downtime for the user?</p>
<p>Companies have varying answers to these questions. As one example, some companies do quarterly deployments. But this typically leads to mountains of code for that release date and may mean that some projects are shipped incomplete because they needed to meet the deploy window but weren’t actually ready to ship. Some bugs get resolved, but other bugs are created by the massive volume of code shipped and all the moving parts involved. Then, if a bug isn’t deemed important enough for a hotfix of some sort, it has to wait for the next quarterly deployment. This process often means more pains to address—pains for the team as well as for the user.</p>
<p>While we never had quarterly deployments, we were working with a cumbersome deployment process. As part of our ethos of customer-centricity, continuous improvement, and the assessment I mentioned above, we arrived at the following approach.</p>
<h2 id="the-sagesure-solution">The SageSure solution</h2>
<p>Once a code change is approved through a merge request, many different test suites are run and, when they pass, the code is pushed to production without any service downtime. Deployments happen frequently—as often as code is merged; it’s not unusual to have three or more production deployments per hour. These deployments could be in different areas of our software: a UI enhancement here, a backend integration there, a bug squashed over there.</p>
<h2 id="testing-testing-testing">Testing testing testing</h2>
<p>The beauty of frequent small deployments is that each package is validated through each level of testing. We have automated unit tests, automated helm tests, automated UI tests, and automated Postman tests. Each suite verifies the new functionality, and also verifies no previous functionality has been altered or removed unintentionally. Pushing with confidence!</p>
<p>Each test suite is designed to test specific aspects of our software product. Unit tests confirm the behavior of the smallest testable parts of our code. Helm tests verify the server spun up correctly. Postman tests verify API functionality, both internally and with external partners. UI tests verify E2E paths are on their best behavior. A newly launched server is not yet accessible to live users; it is spun up in preview mode. Only after every test suite passes is the new server given traffic, and the old server is spun down and no longer receives live users.</p>
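<p>The traffic cutover described above can be sketched as a simple gate: the preview server is promoted only if every suite passes. This is an illustrative sketch, not SageSure’s actual tooling; the suite objects and the <code data-astro-raw>router</code> API are invented for the example.</p>

```javascript
// Hypothetical sketch of a gated rollout: run every suite against the
// preview server, and only route live traffic to it when all of them pass.
// The suite and router shapes are invented for illustration.
function rollout(suites, router) {
  for (const suite of suites) {
    // Stop at the first failing suite; the preview server never gets traffic.
    if (!suite.run()) {
      return { promoted: false, failed: suite.name };
    }
  }
  // Every suite passed: cut traffic over and retire the old server.
  router.promotePreview();
  return { promoted: true, failed: null };
}
```

<p>A failing Postman suite, for instance, would return <code data-astro-raw>{ promoted: false, failed: 'postman' }</code> and leave the old server serving users, untouched.</p>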
<h2 id="the-postman-test-suite">The Postman test suite</h2>
<p>This specific test suite attacks a unique problem: external and internal dependencies. With a validation suite collection in Postman, we can verify services are connected to an application in each environment as the pipeline progresses: database connection, internal DNS address routing, authentication, security, application-to-application connectivity, and third-party integration verification—to name a few. These tests are vital to ensure the application is up and able to communicate with the other internal and external dependencies it requires to perform accurately. These tests are also designed to swap out environmental conditions and values. In a lower environment, we can verify connectivity to a test third-party endpoint, while the upper environment verifies the corresponding live endpoint.</p>
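<p>As a rough sketch of what such an environment-aware check can look like, here is a Postman-style test script. The <code data-astro-raw>pm</code> object is normally provided by Postman’s sandbox; a minimal stub stands in for it here so the snippet is self-contained, and the <code data-astro-raw>partnerBaseUrl</code> variable is a made-up example, not one of our real environment values.</p>

```javascript
// Minimal stand-in for Postman's sandbox `pm` object, so the sketch runs
// outside Postman. In a real collection, `pm` is provided for you.
const results = [];
const pm = {
  environment: {
    // In a lower environment this variable would point at the partner's
    // sandbox; in production it would resolve to the live endpoint.
    get: (key) => ({ partnerBaseUrl: 'https://sandbox.partner.example' }[key]),
  },
  response: { code: 200 },
  test: (name, fn) => {
    try { fn(); results.push({ name, passed: true }); }
    catch (err) { results.push({ name, passed: false }); }
  },
};

// The same script validates connectivity in every environment, because only
// the environment variables change as the pipeline progresses.
pm.test('third-party endpoint responded', () => {
  if (pm.response.code !== 200) throw new Error(`got ${pm.response.code}`);
});
pm.test('partner endpoint is configured and secure', () => {
  const url = pm.environment.get('partnerBaseUrl');
  if (!url || !url.startsWith('https://')) throw new Error('bad endpoint');
});
```

<p>In Postman itself, only the two <code data-astro-raw>pm.test</code> blocks would appear in the collection; the stub exists purely to make the sketch runnable on its own.</p>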
<h2 id="going-forward">Going forward</h2>
<p>SageSure has eliminated a tremendous amount of manual, repetitious work by investing in an advanced deployment pipeline. In doing so, we’ve also managed to deliver with greater confidence for ourselves and in the experience of the user: no downtime, and fewer bugs.</p>
<p>If a bug does get through, our process is to write a test that fails in the area where the bug exists. Is it a backend issue? Write a unit test. Is it an E2E mismatch? Write an automated UI test. After the test suite has been updated to look for this case, we fix the bug and run the pipeline. Now, because of our investment in the test suites, we can be sure that:</p>
<ol>
<li>
<p>we fixed the issue;</p>
</li>
<li>
<p>we did not introduce a different bug; and</p>
</li>
<li>
<p>this bug will not happen again in the future.</p>
</li>
</ol>
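<p>This fail-first flow can be sketched with plain functions in place of a real test framework. The premium-rounding bug below is invented purely for illustration; it is not a bug we actually shipped.</p>

```javascript
// Step 1: the buggy code. It truncates instead of rounding, losing a cent
// on some premiums. (1018 / 12 = 84.83, which should round to 85.)
const buggyMonthly = (annualPremiumCents) => Math.floor(annualPremiumCents / 12);

// Step 2: capture the bug as a test that fails against the current code.
const regressionTest = (monthly) => monthly(1018) === 85;

// Step 3: fix the code so the regression test (and the rest of the suite)
// passes. The test stays in the suite, so this bug cannot silently return.
const fixedMonthly = (annualPremiumCents) => Math.round(annualPremiumCents / 12);
```

<p>Here <code data-astro-raw>regressionTest(buggyMonthly)</code> is false and <code data-astro-raw>regressionTest(fixedMonthly)</code> is true, mirroring the pipeline sequence: the new test fails, the fix lands, and the whole suite goes green.</p>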
<p>With this approach, we’ve been able to apply fixes to production issues in less than 60 minutes after they were discovered.</p>
<p>In short, this continual investment in our pipeline validation gives us more confidence with each iteration—more confidence, and a better experience for our users.</p>
<h2 id="delivering-with-confidence">Delivering with confidence</h2>
<p>Continuous delivery requires great confidence. Once achieved, it improves the quality of life for members of our development team. More regular working hours. Less stress on deployments. Less red tape. More backyard BBQs and weekend water park adventures. Getting here was more than worth it, and we have our excellent team to thank—as well as Postman.</p>
<p>This article has also been published <a href="https://blog.postman.com/sagesure-insurance-continuous-feature-delivery-with-postman/">here</a>.</p>]]></description><pubDate>Fri, 01 Apr 2022 00:00:00 GMT</pubDate></item><item><title><![CDATA[Three products, one design system]]></title><link>https://tech.sagesure.com/blog/2022-02-15-three-products-one-design-system/</link><description><![CDATA[<p>Until recently, SageSure’s three flagship products—a policyholder portal, an agent portal, and a servicing application—appeared to have adequate visual similarities tying them together despite differences in code.</p>
<p>However, once SageSure’s design system initiative began in earnest and all three were closely evaluated, it was clear the products deviated from each other in a number of important ways, leading to inefficiency and confusion.</p>
<h2 id="a-shared-philosophy">A shared philosophy</h2>
<p>SageSure’s digital team began to realize that even though the three products had unique purposes, they shared enough in common that unifying and componentizing their building blocks would increase team velocity as well as reduce ambiguity during the transition from design to engineering. So we set about creating a system based on atomic design principles that would provide our teams with a common vocabulary and help them build features with a shared philosophy.</p>
<p>With three unique experiences to consider, we had to ask ourselves: can these apps in fact be unified in both design and code? Should they? What are the advantages? Is it a good use of our time? We began to <a href="https://tech.sagesure.com/blog/2021-11-30-design-systems-and-trade-offs/">explore the trade-offs</a> involved.</p>
<p>One of the conclusions was that, even though our current products have three siloed audiences that would be unlikely to notice the unification effort, SageSure’s designers and engineers would benefit greatly, and a unifying system would help validate our design principles at scale. Additionally, the workflows of our traditionally siloed audiences are increasingly overlapping, necessitating a system to help deliver a coherent, unified experience to all our customers.</p>
<h2 id="piecing-it-all-together">Piecing it all together</h2>
<p>The first step was doing an audit of the three products and making note of all the visual similarities and differences. Focusing first on simple foundational elements such as color, spacing, and typography, we explored the possibilities and ultimately found middle-ground solutions that would work in all three contexts without causing disruption to the existing UI.</p>
<p>That process also helped us codify simple guidelines (ex: use the <code data-astro-raw>fill/active</code> color token for navigation states) so that designers would always understand what to use and when, regardless of the product.</p>
<figure>
  <img src="/assets/blog/2022-02-15-ipcm.png" width="2142" height="1153" alt="Screenshot of an application with an overlay of lines indicating colors on screen.">
  <figcaption>Color auditing process to enumerate and identify existing color usage.</figcaption>
</figure>
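<p>One lightweight way to make a guideline like “use the <code data-astro-raw>fill/active</code> color token for navigation states” enforceable in code is a shared token map that components resolve by role instead of hard-coding values. The sketch below is hypothetical; the token names echo our guideline, but the hex values are made up.</p>

```javascript
// Hypothetical design-token map shared by all three products. The values
// here are invented; the real palette lives in Figma and Storybook.
const tokens = {
  'fill/active': '#1a6bd8',
  'fill/hover': '#3d85e0',
  'text/primary': '#1c1c1e',
};

// Components look tokens up by role, so a palette change in one place
// propagates everywhere, and a typo fails loudly instead of silently.
function token(name) {
  if (!(name in tokens)) {
    throw new Error(`Unknown design token: ${name}`);
  }
  return tokens[name];
}
```

<p>A navigation component would then call <code data-astro-raw>token('fill/active')</code> rather than pasting a hex value, keeping design and engineering aligned on the same vocabulary.</p>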
<p>From there we designed more complex pieces (ex: buttons, dropdown menus) that had easily identifiable purposes while also being visually broad enough to work seamlessly in all three products. One at a time, we staged them in Figma and discussed their variants and states along the way. Each one was then coded, added into Storybook, QA’d, and released to the teams to begin integrating them.</p>
<p>Bit by bit, we tied our products together using Storybook as a centralized source of truth. The process also helped us define a contribution model for creating new components.</p>
<h2 id="a-holistic-product">A holistic product</h2>
<p>One important note: until recently, many companies have defined “design system” as a simple UI kit for designers to prototype features. At SageSure, we consider the beginnings of our system to be a holistic, overarching product with three crucial touch points—a UI kit in Figma, a Storybook site where components are consumed, and <a href="https://zeroheight.com/8a5dd75a1/p/469f06-sagesure-design-system">a documentation site</a> for guidance around usage (as well as serving as a great onboarding tool to help new employees understand the principles upon which we built the system).</p>
<figure>
  <img src="/assets/blog/2022-02-15-figma.png" width="2248" height="1016" alt="Screenshot of design system components in Figma, including colors, icons, spacing, and typography.">
  <figcaption>Component library in Figma.</figcaption>
</figure>
<figure>
  <img src="/assets/blog/2022-02-15-storybook.png" width="1519" height="814" alt="Screenshot of a button component in Storybook.">
  <figcaption>Component library in Storybook.</figcaption>
</figure>
<figure>
  <img src="/assets/blog/2022-02-15-zeroheight.png" width="1268" height="805" alt="Screenshot of the SageSure design system documentation in Zeroheight.">
  <figcaption>Design system documentation in Zeroheight.</figcaption>
</figure>
<p>Building a design system for multiple products can seem daunting and there are challenges to keeping everything aligned. But purposeful, well-defined global styles and proper <a href="https://spectrum.adobe.com/page/design-tokens/">tokenization</a> glueing things together has helped our teams build applications with increasing confidence and clarity.</p>]]></description><pubDate>Tue, 15 Feb 2022 00:00:00 GMT</pubDate></item><item><title><![CDATA[Retaining tech talent]]></title><link>https://tech.sagesure.com/blog/2022-01-26-retaining-tech-talent/</link><description><![CDATA[<p>I was recently a guest on the <a href="https://postlight.com/podcast/retaining-talent-tim-meaney-returns">Postlight Podcast</a> talking about retaining technical talent. While the difficulty of filling the many open technical positions at most companies is widely known and discussed, retention is often overlooked. And maximizing the length of stay for the members of your technical team is the key to maximizing delivery—seasoned team members ship code. And churn is a cascading setback.</p>
<p>I shared what I view as the four key elements that drive retention that you should be actively considering if you‘re in a leadership position on a technical team:</p>
<ol>
<li>Connectedness. Ensure a connection for each team member—to the company, to the strategy, to their manager, to their work, and especially to their team.</li>
<li>Agency. Cultivate the voice for tech talent—for their work, their career progression, their tooling, and the processes used by the team. And listen to this voice.</li>
<li>Growth. Ensure your technical team members have opportunities to grow and the support to make that happen.</li>
<li>Modernity. Top tech talent wants to work on modern tech with modern processes and practices, and to stay current with the goings-on in their discipline as well as the industry. Is yours modern?</li>
</ol>
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">.<a href="https://twitter.com/timothymeaney?ref_src=twsrc%5Etfw">@timothymeaney</a> explains how fostering connectedness can be powerful strategy to retain talent: <a href="https://t.co/MZaAB7KczO">https://t.co/MZaAB7KczO</a> <a href="https://t.co/dYRyia07UJ">pic.twitter.com/dYRyia07UJ</a></p>&mdash; Postlight (@Postlight) <a href="https://twitter.com/Postlight/status/1479150924232892427?ref_src=twsrc%5Etfw">January 6, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>]]></description><pubDate>Wed, 26 Jan 2022 00:00:00 GMT</pubDate></item><item><title><![CDATA[Design systems and trade-offs]]></title><link>https://tech.sagesure.com/blog/2021-11-30-design-systems-and-trade-offs/</link><description><![CDATA[<blockquote>
<p>Effective Product Design is a cascading series of trade-offs. There is no right, no wrong, there’s only trade-offs. – <em>Tim Meaney, VP of Software at SageSure</em></p>
</blockquote>
<p>At SageSure, our software is not the product we offer our customers. It’s a means to an end, a tool our customers use to gain access to our core product: insurance built on continuous capacity and long-term financial stability.</p>
<p><em>That</em> is what we aim to deliver our customers. It just so happens that software makes that happen.</p>
<p>The faster we can build quality, effective software, the faster we can deliver to our customers.</p>
<h2 id="one-trade-off-useful-software-over-creative-expression">One trade-off: useful software over creative expression</h2>
<p>Over the last few years the software team at SageSure had developed several threads of systematic design—style guides, pattern libraries, and components—which have together constituted the building blocks of our varied digital products.</p>
<p>These threads of systematic design were loosely defined and controlled, in turn granting broad freedom and creative interpretation to individual team members. And while the discrete threads were consistent in themselves, the assembled whole often was not.</p>
<p>But given the nature of our organization—its size, structure, and resources—the trade-off of consistency around narrow sets of patterns at the expense of the whole, and individual creativity at the expense of cohesion and team velocity, worked.</p>
<p>Until it didn’t.</p>
<p>We had freedom and a sense of creativity in assembling the building blocks, but our teams felt an ever-growing desire to relinquish that kind of freedom in exchange for increased clarity and velocity. <strong>We wanted to ship more useful, quality software to our customers over being more individually expressive.</strong></p>
<p>This was one of the key trade-offs we documented in assessing our needs to build a design system.</p>
<p>Resources are finite for our team. We wanted to leverage this constraint as an advantage, allowing necessity and creative constraints to be the mother of invention.</p>
<h2 id="trade-offs">Trade-offs</h2>
<p>The word “trade-off” can carry a negative connotation in software. Rich Hickey points this out in his talk <a href="https://github.com/matthiasn/talk-transcripts/blob/master/Hickey_Rich/HammockDrivenDev-mostly-text.md">“Hammock Driven Development”</a>:</p>
<blockquote>
<p>Everybody says design is about tradeoffs. Everybody knows this. Usually when they talk about tradeoffs in their software, they are talking about the parts of their software that suck. “I had to make these tradeoffs.” That is not what a tradeoff is, right?</p>
<p>You have to look at, at least, two solutions to your problem. At least two. And you have to figure out what is good and bad about those things before you can say, “I made a tradeoff.” So I really recommend that you do that. And when you do it, you might want to write that down somewhere.</p>
</blockquote>
<p>Given a small team and a small organization, we can’t create a system like <a href="https://material.io/design">Material Design</a> from Google or <a href="https://www.lightningdesignsystem.com/">Lightning</a> from Salesforce.</p>
<p>Then again, why would we want to? Their design systems are solving an entirely different set of organizational problems than we are.</p>
<p>This is why it is so critical to pinpoint what you want to accomplish and then, per Rich’s advice, document the trade-offs you’re willing to make to get there—because there will be some!</p>
<p>Understanding the trade-offs you’re willing to make helps guide your decision making and reveals your priorities. It’s similar to <a href="https://adactio.com/articles/17733">principles where you state X <em>over</em> Y</a>, but with a trade-off it can be X <em>instead</em> of Y. If you don’t acknowledge that limitation, that trade-off, you’ll mistakenly think you can have both. Only later will you realize the universe isn’t so generous.</p>
<h2 id="surface-and-document-your-trade-offs-now">Surface and document your trade-offs now</h2>
<p>Consider again the quote that began this article from Tim. I’m going to swap out “Product Design” for “design systems” and I think it still holds:</p>
<blockquote>
<p>[a design system] is a cascading series of trade-offs. There is no right, no wrong, there’s only trade-offs.</p>
</blockquote>
<p>The faster you understand the needs of your organization and the outcomes you want in service of your customers, the faster you can state, document, and make the inevitable trade-offs that come with creating a design system.</p>
<p>You’ll have to make trade-offs in creating a design system sooner or later. If you make them later, you’ll suffer from them longer. Better to document and make them sooner.</p>]]></description><pubDate>Tue, 30 Nov 2021 00:00:00 GMT</pubDate></item><item><title><![CDATA[Evolving our static site architecture]]></title><link>https://tech.sagesure.com/blog/2021-09-20-evolving-static-site-architecture/</link><description><![CDATA[<h2 id="where-we-started">Where we started</h2>
<p>Approximately six years ago, all of our services were hosted on Windows virtual machines in a data center with Microsoft’s Internet Information Services (IIS) and Apache. Deployments were performed manually by operations engineers and involved copying files received from developers to multiple VMs. When I joined SageSure in 2015, the DevOps team was already working on a project to dockerize all services and automate deployments. After many months, and a few iterations on the architecture, we ended up with a more modern setup: dockerized services running on Linux hosts, deployed using automation from Jenkins.</p>
<p>This setup provided us a number of benefits:</p>
<ol>
<li>Deployments were automated, reducing time and errors associated with deployment.</li>
<li>Build artifacts were immutable and generated with each release. Every commit to a releasable branch generated a docker image that could be deployed to any environment for testing.</li>
<li>Scaling was easy. Instead of provisioning a new VM, we just needed to click a ’+’ button in Rancher, our Docker orchestration UI.</li>
</ol>
<p>And this worked great for us. During this time, we were also migrating our services from the data center into AWS. The new dockerized setup made the move a lot easier.</p>
<h3 id="lingering-pain-points">Lingering pain points</h3>
<p>But as the years have gone by, even as we enjoyed the benefits of our new architecture, we noticed a few weaknesses, particularly for our websites. Our routing layer relied on polling internal DNS to resolve the container IPs. As a result, every deployment had a small window (a few seconds, usually) where requests could fail if the old containers were destroyed before the proxy found the new containers.</p>
<p>Additionally, our frontend teams asked for the ability to preview a build for each feature branch before merging into the mainline. Thanks to our Docker architecture, we were able to add this functionality pretty easily by deploying the dockerized build and leveraging wildcard subdomains in DNS. However, this meant that we were now running dozens of copies of our frontend apps at the same time in our integration environment. Even with reduced memory reservations, we frequently ran out of capacity and had to scale up the cluster, or manually review and delete a large number of stale feature branch builds.</p>
<p>Lastly, running the websites within our own Docker infrastructure and behind our own proxies meant that any downtime or CPU contention in the cluster had a very visible impact to our users. Also, we weren’t doing anything explicit to manage the caching of website assets, so we would often have to instruct users to clear their browser cache if they were having issues after a deployment.</p>
<p>So it was time to evolve our architecture once again.</p>
<h2 id="implementing-a-static-site-architecture">Implementing a static site architecture</h2>
<p>Our new architecture is based on <a href="https://d0.awsstatic.com/whitepapers/Building%20Static%20Websites%20on%20AWS.pdf">AWS’s “Building Static Websites on AWS” whitepaper</a>.</p>
<p>In summary, the architecture consists of:</p>
<ul>
<li>An AWS S3 bucket to hold variations of the website (to support blue/green and preview deployments)</li>
<li>AWS CloudFront distributions to serve as a CDN and cache-management solution</li>
<li>AWS Application Load Balancer (ALB) to serve traffic to CDN or API</li>
<li>AWS Web Application Firewall (WAF) to provide Geo/IP security restrictions to the CloudFront content</li>
<li>A lightweight reverse proxy for API routes</li>
</ul>
<h3 id="routing-requests-to-the-apis">Routing requests to the APIs</h3>
<p>One of the key factors to consider in the new design was how we would route requests made by the website to a backend API. For all of our websites, the backend APIs were paths on the website domain (e.g. <a href="https://app.example.com/api">https://app.example.com/api</a>) instead of on a dedicated API subdomain (e.g. <a href="https://api.example.com">https://api.example.com</a>). Changing this would have ballooned the scope of the project, so we were left with handling the routes on the same domain. In practice, that means a request to the website domain could be intended for a backend or for the website itself, depending on the path. We used a lightweight reverse proxy to direct API-bound traffic to the appropriate backend service and route all remaining traffic to the webserver.</p>
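<p>As a rough sketch of that rule, the same-domain split can be expressed as a simple path classifier. The <code data-astro-raw>/api</code> and <code data-astro-raw>/auth</code> prefixes below are illustrative assumptions; our real proxy matches several legacy path prefixes:</p>

```shell
#!/bin/sh
# Toy version of the same-domain routing rule: requests under API prefixes
# are forwarded to a backend, everything else is served as a static asset.
# The prefixes here are hypothetical, not our production configuration.
route_for() {
  case "$1" in
    /api/*|/auth/*) echo "backend" ;;   # proxied to the API service
    *)              echo "website" ;;   # served from the static site
  esac
}

route_for /api/quotes   # -> backend
route_for /index.html   # -> website
```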
<p>Moving forward, we had a few options for how we would split traffic between the CDN (taking over the job of serving the website assets) and the API backends:</p>
<ol>
<li>Put our reverse proxy inline, in front of the CDN. This would mean our proxy instance and docker infrastructure are still in the data path, but requires the least amount of change.</li>
</ol>
<figure>
  <img src="/assets/blog/2021-09-20-inline.png" width="1060" height="405" alt="Diagram of an approach where the inline reverse proxy is in front of the CDN.">
</figure>
<ol start="2">
<li>Split traffic at the ALB layer using path-based routing. Ideally, we could collapse all backend routes to a common /api/ base to ease the routing configuration.</li>
</ol>
<figure>
  <img src="/assets/blog/2021-09-20-api.png" width="860" height="405" alt="Diagram of the approach where traffic is split at the ALB layer with path-based routing.">
</figure>
<p>In the end, we decided to think of these as phases instead of competing options. We were able to deploy the first option quickly with minimal changes by leaving the API routing untouched. Once our websites are all migrated, we plan to implement option 2 and update the API routing to remove our proxy as a dependency of accessing the website.</p>
<h2 id="deploy-process">Deploy process</h2>
<p>The updated deployment process now works as follows:</p>
<ol>
<li>For each deployable build, the pipeline job compiles the static site and saves the bundle as an artifact.</li>
<li>The bundle is synced to the environment’s S3 bucket under a versioned folder.</li>
<li>For stage preview environments, the reverse proxy takes care of routing to the correct folder based on subdomain (no CDN is used for stage previews).</li>
<li>For production preview, a post-deployment script will update the preview CloudFront origin to point at the new folder and perform cache invalidation, as required.</li>
<li>Once the prod preview has been accepted, a second job will be used to update the production CloudFront origin to the new folder and perform cache invalidation, as required.</li>
</ol>
<p>Notes:</p>
<ul>
<li>The active distribution files are never directly modified during deployment. This keeps each build immutable and available as a target for fast rollback.</li>
<li>An alternative would be to use stable CloudFront origins (in blue/green style) and switch between them using either:
<ul>
<li>DNS to switch the active distribution. This, however, would subject users to DNS propagation delays.</li>
<li>Weighted routing to use a single DNS entry against a blue/green CloudFront distribution. This would require maintaining blue-green state in the deployment script and preview routing to ensure the correct version is updated and enabled.</li>
</ul>
</li>
</ul>
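<p>Putting the pieces together, a deploy under this scheme boils down to syncing the build into a versioned folder and repointing the CloudFront origin. The sketch below uses made-up names (bucket, distribution ID, CI variable), and the aws CLI calls are left as comments since they require live credentials:</p>

```shell
#!/bin/sh
# Sketch of the versioned-folder deploy. All identifiers are hypothetical.
version="${CI_COMMIT_SHORT_SHA:-dev}"   # immutable identifier for this build
prefix="builds/${version}"              # each build lands in its own folder

# Sync the compiled bundle into the versioned folder:
#   aws s3 sync dist/ "s3://site-bucket/${prefix}/" --delete
# Repoint the distribution's origin path and invalidate cached paths:
#   aws cloudfront update-distribution --id EXAMPLEID ... (OriginPath="/${prefix}")
#   aws cloudfront create-invalidation --distribution-id EXAMPLEID --paths "/*"

echo "deploy target: ${prefix}"
```

Because the active files are never overwritten, rolling back is just repointing the origin at a previous folder.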
<h2 id="conclusion">Conclusion</h2>
<p>We migrated a major internal website to this new architecture late last year and it has been running in production without issues for the past 9 months. Our internal team has enjoyed the fast updates, the flexibility to deploy more often without user impact, and the increased number of available preview environments. We’re in the process of migrating our public websites to the new architecture so those teams can enjoy the same benefits.</p>]]></description><pubDate>Mon, 20 Sep 2021 00:00:00 GMT</pubDate></item><item><title><![CDATA[Our path to CICD]]></title><link>https://tech.sagesure.com/blog/2021-09-13-our-path-to-cicd/</link><description><![CDATA[<p>At SageSure, we are undertaking an initiative to migrate our projects to a full CICD pipeline. After code is merged to a main branch, it will run through a set of fully automated tests and, if all pass, be deployed to production. Automatically and quickly.</p>
<p>Our CICD goal: All Merge Requests should be capable of being released in less than 60 minutes.​</p>
<p>This post is about why we want to move to CICD, the challenges we need to overcome, and how we will know we’re succeeding.</p>
<h2 id="current-state">Current state</h2>
<p>Currently we deploy to production about once a week for most projects. More frequently for our backend services, less frequently for our frontend apps. Our deployment frequency and lead time metrics both reflect this:</p>
<img src="/assets/blog/2021-09-13-deployment-frequency.jpg" width="833" height="886" alt="Screenshot of various graphs showing the deployment frequency of applications SageSure.">
<img src="/assets/blog/2021-09-13-lead-time.jpg" width="942" height="943" alt="Screenshot of various graphs showing the lead time to deploy of applications SageSure.">
<h2 id="why-move-to-cicd">Why move to CICD?</h2>
<p>There are three main reasons for our switch to CICD:</p>
<ul>
<li>Faster feedback</li>
<li>Reduce the cost and risk of release​</li>
<li>Create a better place to work!</li>
</ul>
<h2 id="faster-feedback">Faster feedback</h2>
<p>Are we delivering the right thing to our customers? In his <a href="https://www.youtube.com/watch?v=skLJuksCRTw">2012 Continuous Delivery</a> talk, Jez Humble pointed to <a href="https://www.standishgroup.com/sample_research_files/Modernization.pdf#page=15">a study by the Standish Group</a> that looked at the features used in applications and found that:</p>
<ul>
<li>20% of features are often used</li>
<li>30% of features get used sometimes or infrequently​</li>
<li>50% of features are hardly ever or never used</li>
</ul>
<figure>
  <img src="/assets/blog/2021-09-13-standish-group.jpg" width="353" height="284" alt="Screenshot of a pie chart from the Standish Group showing an estimate of features used in custom applications">
  <figcaption>Source: <a href="https://www.standishgroup.com/sample_research_files/Modernization.pdf#page=15">“Modernization” by the Standish Group</a></figcaption>
</figure>
<p>Half of developed features being rarely or never used is a shocking statistic. Writing software that our customers don’t even want or use is a huge waste.</p>
<p>How do we reduce that waste? Lean principles suggest that we figure out how to build the smallest thing that would allow us to validate a hypothesis, and optimize for a build-measure-learn feedback loop. Listen, hypothesize, build, get feedback, iterate and repeat.</p>
<figure>
  <img src="/assets/blog/2021-09-13-lean-startup.jpg" width="600" height="549" alt="Lean startup flow chart">
  <figcaption>Source: <a href="http://theleanstartup.com/book">Eric Ries – The Lean Startup</a></figcaption>
</figure>
<p>We want to optimize our software delivery process for time around this loop​. And Lead Time​ is a key component of this cycle.</p>
<p>If you are not familiar with Lead Time, the <a href="https://www.amazon.com/Accelerate-Software-Performing-Technology-Organizations/dp/1942788339">Accelerate</a> book defines Lead Time as:</p>
<blockquote>
<p>“the time it takes to go from code committed to code successfully running in production.”</p>
</blockquote>
<p>and states that:</p>
<blockquote>
<p>“shorter product delivery lead times are better since they enable faster feedback on what we are building and allow us to course correct more rapidly.”</p>
</blockquote>
<p>That is the first reason to move to CICD​: to speed up our feedback loop​ and gain confidence, through quick iteration and experimentation, that we are in fact building the right thing​.</p>
<h2 id="reducing-the-cost-and-risk-of-release">Reducing the cost and risk of release​</h2>
<h3 id="the-cost">The cost</h3>
<p>The problem with our current weekly deployments is that the cadence incurs a very significant cost. I have previously written on my personal blog about <a href="https://www.shaunabram.com/how-much-is-your-slow-lead-time-costing-you/">the cost of slow lead times</a>. On a team of 10 engineers, the costs associated with a one-week lead time can approximate the output of more than 3 engineers, or $400,000 per year. That is a huge cost.</p>
<p>By decreasing the time it takes to generate revenue from new features, reducing the cost of necessary work such as coordinating and performing manual releases, and reducing work in progress and context switching, we expect to significantly reduce our cost to release.</p>
<h3 id="the-risk">The risk</h3>
<p>The bigger a release, the more changes are released at once. That means more chances of something failing, and when something does fail, it is harder to know which change caused the problem, so triage takes longer.</p>
<p>With CICD, each release is smaller. When something goes wrong, it is much easier to understand what caused it. And of course, having a CICD pipeline means you should be able to roll out a fix much faster too.</p>
<h2 id="creating-a-better-place-to-work">Creating a better place to work!</h2>
<p>The last but by no means least reason for us to move to CICD is for our team.</p>
<p>Right now, some of our releases need to be done outside of working hours (typically after 8pm EST), and require the team to stay late. By moving to a CICD model, we can release anytime, including during working hours, and let the team go home at a reasonable hour. Enabling no-downtime releases is something we can do without moving to full CICD, but it is one of the many improvements that we are doing under the CICD umbrella.</p>
<p>And engineers want to work in a CICD environment. They want to see their changes have an impact quickly. They want to get the feedback of running their code in production. They want to work somewhere where they are working on interesting problems rather than repeatedly running the same manual processes.</p>
<h2 id="our-path-to-cicd">Our path to CICD</h2>
<p>In the excellent <a href="https://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912">Continuous Delivery</a> book, they talk about some of the practices and principles of continuous delivery, including developing a culture of continuous improvement, building quality in​, automating where you can, and working in small batches​. We are striving for all those things in SageSure, with more work to do in some of the areas than others.</p>
<p>But there are also specific technical challenges we face too. We talked earlier about the need to enable anytime deployments that do not impact customers. For us, this specifically includes migrating to CloudFront for our frontend services, and migrating to Kubernetes (from Rancher) for our backend services. Our Director of Engineering, Chris Lunsford, recently wrote about our Static-Site Architecture <a href="https://tech.sagesure.com/blog/2021-09-20-evolving-static-site-architecture/">here</a>.</p>
<p>We also need an increased focus on our tests. In a CICD world, a passing build should signal readiness for production. In addition to using automated test suites, we have also allowed our release candidates to “bake” in a stage environment, allowing internal users, tests, and services to interact and (hopefully) smoke out any issues with a release. This can be an expensive and time-consuming approach, and we want to be faster and more consistent. Going forward, we want better automated test coverage and less manual testing. We need comprehensive unit, integration, and browser-based tests. In particular, we need to step up our API tests, e.g. using Postman.</p>
<p>And with good test coverage in place, we need to make sure we are running all the tests we can in production for post-deployment validation.</p>
<p>Finally, after deployment, we need better monitoring, observability, and alerting. What does success look like for a new feature, and are there specific metrics we can monitor for it? Can we automatically detect that a new feature is unhealthy, and automatically roll back? Can we optimize for Mean Time To Recovery?</p>
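<p>One hedged sketch of what automated rollback detection could look like: compare an observed error rate against a threshold after each deploy. The threshold and metric source below are assumptions, not values from our real monitoring:</p>

```shell
#!/bin/sh
# Hypothetical post-deploy health check: signal a rollback if the 5xx
# rate exceeds a tolerance. Threshold and inputs are illustrative only.
ERROR_RATE_THRESHOLD=1   # whole percent of 5xx responses we tolerate

should_rollback() {
  # $1 = observed 5xx rate (whole percent) from the monitoring system
  [ "$1" -gt "$ERROR_RATE_THRESHOLD" ]
}

if should_rollback 5; then echo "rollback"; else echo "keep"; fi   # -> rollback
```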
<p>All of these are challenging tasks, but we are already making incremental improvements, and we don’t need to solve them all before moving to CICD. While better test coverage is a good way to reduce risk with CICD, more frequent releases themselves also reduce risk. Baby steps. Improve. Iterate.</p>
<h2 id="how-will-we-know-if-were-succeeding">How will we know if we’re succeeding?​</h2>
<p>One of the books we love at SageSure is <a href="https://www.amazon.com/dp/B07B9F83WM">Accelerate</a>. We covered it in one of our book clubs and we refer to it frequently. One of the reasons we like the book so much is that it brought a huge amount of data, rigor &#x26; scientific analysis to the table. The authors found a way to define &#x26; measure the performance of software teams, using these four metrics:</p>
<p>Throughput metrics:</p>
<ul>
<li>Lead time ​</li>
<li>Release Frequency​</li>
</ul>
<p>Stability metrics:​</p>
<ul>
<li>Time to restore service​ (aka MTTR)</li>
<li>Change Failure rate​</li>
</ul>
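<p>Lead time, the first of these, is conceptually simple: the elapsed time between a commit and that commit running in production. A toy calculation with made-up timestamps:</p>

```shell
#!/bin/sh
# Toy lead-time calculation from two (made-up) Unix timestamps.
commit_epoch=1631500000   # when the change was committed
deploy_epoch=1631503600   # when it was live in production
lead_time_minutes=$(( (deploy_epoch - commit_epoch) / 60 ))
echo "${lead_time_minutes} minutes"   # -> 60 minutes, right at our target
```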
<p>As well as finding a way to measure performance, they found a way to predict it. Specifically, they found that the practices of Continuous Delivery predict the high performance of software teams. And what’s more, they found that high performers have practices &#x26; principles that allow them to achieve both higher throughput and stability.​</p>
<p>For us in SageSure engineering, these are the metrics we are using as a guide in our migration to CICD.</p>
<p>We think about the throughput metrics a lot, aiming for smaller batch sizes in the form of lower lead time and higher release frequency.</p>
<p>And it is the stability metrics that we use as our guard rails along the way. If we move to CICD and see an increase in our change failure rate, we are doing something wrong. And if a failed release takes longer to restore than it does today, we are again doing something wrong. In either case, we would need to step back and take stock.</p>
<p>Gathering these metrics in advance has been very useful, and going into this transformation, we know:</p>
<ul>
<li>Where we’re starting from in terms of deploy frequency and lead times​</li>
<li>Where we want to go to: commit to production in an hour​</li>
<li>The metrics that show us if we’re breaking too many things along the way​ (MTTR and Change Failure rate​)</li>
</ul>
<p>So, to answer our initial question: how will we know if we’re succeeding with CICD? We will know when our metrics tell us: when we have improved our throughput metrics, in the form of lower lead time and increased release frequency, without negatively impacting our stability metrics. Indeed, it would be great to see our time to restore service (MTTR) and change failure rate actually improve.</p>
<p>But more importantly, we will also see more and faster experimentation and iteration on features that our customers actually want and use.</p>
<h2 id="timelines">Timelines</h2>
<p>We are already making good progress and expect to have our first service on a CICD model in a few weeks, with the aim to have a significant number of services and UI apps on CICD by year end.</p>
<p>Will we migrate all projects to CICD? We haven’t decided yet. Instead, we plan to roll out incrementally, review the benefits (and costs), and decide then.</p>
<p>We will be sure to create another post detailing our successes and failures as soon as we have meaningful progress to report.</p>
<p>In the meantime, if you are interested in joining us on our journey, please check out our <a href="https://www.sagesure.com/careers/">openings</a> and don’t hesitate to reach out.</p>]]></description><pubDate>Mon, 13 Sep 2021 00:00:00 GMT</pubDate></item><item><title><![CDATA[Hello world]]></title><link>https://tech.sagesure.com/blog/2021-09-06-hello-world/</link><description><![CDATA[<p>In the world of programming, the first thing you write is a <a href="http://helloworldcollection.de/">“Hello, World!” program</a>.</p>
<p>In the world of blogging, the first thing you write is a post detailing the technology that powers your blog—then you never post again 😆.</p>
<p>In the spirit of keeping with tradition, let’s take a look at what powers this website.</p>
<h2 id="the-tech">The tech</h2>
<p>For this site, we took a JAMstack approach: markdown, templates, and data go through a static site generator and end up as static files deployed to a URL.</p>
<p>The site is entirely under version control. Post authors, site developers, and other contributors all use a Git-based workflow (like many other projects at SageSure). On push, our GitLab instance runs a CI/CD build and deploys the site to an S3 bucket. Changes pushed to <code data-astro-raw>main</code> go straight to prod, while preview deploys are created at unique URLs for each branch.</p>
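<p>The branch-aware part of that pipeline can be sketched as a tiny helper that picks a deploy destination. The folder layout below is a hypothetical stand-in for our real configuration:</p>

```shell
#!/bin/sh
# Toy mapping from Git branch to S3 destination: main deploys to the
# production folder, every other branch gets a unique preview prefix.
deploy_prefix() {
  if [ "$1" = "main" ]; then
    echo "site"                 # production content
  else
    echo "previews/$1"          # per-branch preview URL
  fi
}

deploy_prefix main          # -> site
deploy_prefix feature/rss   # -> previews/feature/rss
```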
<p>Now, to answer the question everyone is asking: what static site generator are you using? We decided to give <a href="https://astro.build/">Astro</a> a shot. It’s the new kid on the block that merges a variety of ideas and approaches pioneered by the other popular <a href="https://jamstack.org/generators/">static site generators</a> and front-end frameworks.</p>
<p>What’s compelling about Astro is that it takes an approach of “no JavaScript shipped to the client” by default. In a way, it’s all the benefits of a modern front-end developer experience without passing on a lot of the cost of that experience to the client.</p>
<h2 id="astro-first-impressions">Astro: first impressions</h2>
<p>It’s worth noting that, at the time of this writing, Astro is still very much in the early stages—not yet to v1. And with their own custom file format (<code data-astro-raw>.astro</code>) we’re still waiting for <a href="https://github.com/snowpackjs/astro/issues?q=is%3Aissue+prettier+is%3Aopen">prettier support</a> to get the ever-so-nice “format on save” functionality.</p>
<p>However, <a href="https://twitter.com/jimniels/status/1412813156917944333?s=20">as I found</a>, they do have a Discord with friendly folks providing the most up-to-date info you could want. It’s a <a href="https://discord.com/channels/830184174198718474/830184175176122389/874375308613148694">wholesome</a> GitHub and Discord community—an incredibly valuable asset for a young project with lots of kinks yet to be ironed out.</p>
<p>For me personally, my experience with Astro thus far pretty closely mirrors what Robin Rendle described in <a href="https://www.robinrendle.com/notes/2021-08-11-redesign-everything-broke/">his post</a> detailing his attempt to switch from Eleventy to Astro. First, things are changing beneath your feet:</p>
<blockquote>
<p>A ton of breaking changes had been made since I updated the version of Astro I was on.</p>
<p>And I only started this project a few weeks ago!</p>
</blockquote>
<p>Through the course of building this website alone, I went from <code data-astro-raw>astro@0.16</code> to <code data-astro-raw>astro@0.20</code>. Fortunately, I wasn’t doing anything sophisticated enough to merit breakage (at least that I know of 😬). In fact, bumping versions actually fixed a few minor bugs I’d noticed and meant to investigate; luckily, the Astro folks got to them first!</p>
<p>With that in mind, set your expectations accordingly! If you’re mad about APIs shifting under your feet, you probably shouldn’t be building anything on software versioned at <code data-astro-raw>0.x.x</code>. However, to be sympathetic, it’s one thing to know things are going to be changing and set your expectations accordingly, and it’s quite another to actually experience the frustration of it and realize “maybe I wasn’t as prepared for this as I thought!”</p>
<h2 id="astro-glass-half-full-perspective">Astro: glass half full perspective</h2>
<p>Chris Coyier writes about why <a href="https://css-tricks.com/astro/">he’s bullish on Astro</a>, boiling it down to two major advantages:</p>
<ol>
<li>The JAMstack approach (static HTML/CSS/JS with a no-JS by default approach)</li>
<li>Componentization</li>
</ol>
<p>Regardless of its status as beta software, both of these advantages shine in Astro right out of the gate. As Chris stated, Astro has a vibe of “we’re gonna steal every last good idea we can from what came before, and lean on what the native web does best”.</p>
<p>If you’re worried about adopting something so early, don’t. Basic static use cases like a blog could be ported to another static site generator without too much hassle—it’s merely templates, markdown files, and static assets.</p>
<h2 id="one-last-thing">One last thing…</h2>
<p>Don’t miss <a href="/blog/feed.xml">our RSS feed</a> for this blog—why would you read a blog any other way?</p>]]></description><pubDate>Mon, 06 Sep 2021 00:00:00 GMT</pubDate></item></channel></rss>