<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <id>https://codeinput.com/</id>
    <title>CodeInput Blog</title>
    <updated>2026-04-06T13:57:02.319Z</updated>
    <generator>Feed for Node.js</generator>
    <author>
        <name>CodeInput</name>
        <email>contact@codeinput.com</email>
        <uri>https://codeinput.com/</uri>
    </author>
    <link rel="alternate" href="https://codeinput.com/"/>
    <link rel="self" href="https://codeinput.com//atom.xml"/>
    <subtitle>The latest updates, guides, and insights from CodeInput.</subtitle>
    <icon>https://codeinput.com//favicon.ico</icon>
    <rights>All rights reserved 2026, CodeInput</rights>
    <entry>
        <title type="html"><![CDATA[Git Merge Conflicts: Understanding Ours, Theirs, and Base]]></title>
        <id>https://codeinput.com//blog/git-conflict-revisions</id>
        <link href="https://codeinput.com//blog/git-conflict-revisions"/>
        <updated>2026-04-06T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Merge conflicts are less complex once you understand the three versions git is comparing: ours, theirs, and base. This article walks through how conflicts happen, what the conflict markers mean, and how to find the commit that introduced a conflict.]]></summary>
        <content type="html"><![CDATA[Have you ever stared at a merge conflict and wondered what ours, theirs, and base are? These three versions are key to understanding how Git's three-way merge works. In this article, we'll break down each one and explain how merge conflicts happen. How Merge Conflicts Happen A merge conflict occurs when two branches modify the same part of a file in different ways. Git can't decide which change to keep, so it stops and asks you to resolve it manually. For example, say Alice and Bob both branch off from the same commit. Alice changes the error message in auth.js to say "Invalid credentials". Bob changes the same line to say "Wrong username or password". When their branches are merged, Git sees two different changes to the same line with no clear winner. That's a conflict. This is also why conflicts are less common than you'd expect. If Alice edits one section of auth.js and Bob edits another, Git can often merge both changes cleanly with no conflict. Simple Merge Conflict Example Let's look at a concrete example. This repository has two files, file1.txt and file2.txt. The master branch has 3 commits and the feature branch has 2 commits. Both branches started from the same initial commit, where both files had the same content: From there, each branch diverged on the same files and lines. On master, both files were updated across two separate commits: On feature, also across two separate commits, they were updated to: Now we have two conflicting versions of the same lines. This isn't a decision Git can make automatically, so if you try to merge, you'll get a merge conflict. Merge Conflict Markers When we try to merge feature into master, Git sees that both branches modified the same lines from the same starting point and stops: Git couldn't decide which version to keep, so it paused and marked the conflicting sections directly inside the files. 
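The walkthrough above can be reproduced end to end. Here's a minimal sketch, simplified to a single file with the hypothetical contents from the example (assumes Git 2.28+ for --initial-branch; the identity lines are placeholders so the commits succeed):

```shell
# Recreate the diverging branches from the example
cd "$(mktemp -d)"
git init --initial-branch=master demo && cd demo
git config user.email you@example.com
git config user.name "You"

echo "hello world" > file1.txt
git add file1.txt && git commit -m "initial commit"

git switch -c feature
echo "hello from feature branch" > file1.txt
git commit -am "edit on feature"

git switch master
echo "hello from master" > file1.txt
git commit -am "edit on master"

# Prints: CONFLICT (content): Merge conflict in file1.txt
git merge feature || true   # merge exits non-zero on conflict
```

At this point file1.txt contains both versions wrapped in conflict markers, which is exactly what we open next.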
Opening file1.txt (and similarly file2.txt) reveals this: By default, Git only shows two versions: what your current branch has, and what the incoming branch has. The base (the version both branches started from) is used internally to detect the conflict, but Git doesn't show it unless you configure it to. To expose the base, enable the diff3 conflict style: Now re-triggering the same merge produces: The middle section, between ||||||| and =======, is the base: the file as it existed in the common ancestor commit c256ee4, before either branch touched it. Tip: If you're on Git 2.35 or later, consider zdiff3 instead of diff3. It's a stricter variant that eliminates some false conflicts by being smarter about what it considers a genuine change, with no other downsides. This three-section format is where the terms ours, theirs, and base come from. They didn't originate in Git; they come from diff3, a Unix utility dating back to 1979 that compares three versions of a file: the original and two diverging copies. Git's conflict markers are essentially diff3 output rendered inline, and the vocabulary traveled with the algorithm into every tool built on top of it: IDEs, merge tools like vimdiff or kdiff3, and Git's own documentation. Understanding the Three Versions: Ours, Theirs, and Base Base is the common ancestor: the last commit both branches share before they diverged. In our example, that's the initial commit (c256ee4), where both files contained hello world. It's the neutral starting point that neither branch exclusively "owns". Ours is the version on the branch receiving the merge, the one you're currently on. In our case, we ran git merge feature while on master, so ours is the master version: hello from master. It's worth being explicit here: ours has nothing to do with who wrote the code or who "owns" it in any meaningful sense. Git has no concept of ownership. It's purely about which branch is receiving the changes. 
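A handy trick that isn't covered above, but is standard Git: during a conflicted merge, the index actually holds all three versions as numbered stages (1 = base, 2 = ours, 3 = theirs), and git show can print each one. Recreating the conflict in a scratch repo (single file, hypothetical contents, Git 2.28+) and inspecting the stages:

```shell
# Build a conflicted merge, then read the three versions from the index
cd "$(mktemp -d)"
git init --initial-branch=master demo2 && cd demo2
git config user.email you@example.com && git config user.name "You"
echo "hello world" > file1.txt
git add file1.txt && git commit -m "initial"
git switch -c feature
echo "hello from feature branch" > file1.txt
git commit -am "feature edit"
git switch master
echo "hello from master" > file1.txt
git commit -am "master edit"
git merge feature || true       # leaves the repo in a conflicted state

git show :1:file1.txt           # base   -> hello world
git show :2:file1.txt           # ours   -> hello from master
git show :3:file1.txt           # theirs -> hello from feature branch
```

This is the same base/ours/theirs mapping the conflict markers express, just read directly from the index instead of from inside the file.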
Theirs is the version coming from the branch being merged in. Here that's feature, so theirs is hello from feature branch. Looking back at the conflict marker, it maps directly: One thing worth noting: ours and theirs flip depending on how you ran the merge. If you had switched to feature and run git merge master instead, feature would be ours and master would be theirs. The base stays the same either way, since the common ancestor doesn't change. This also means the labels can surprise you during a git rebase. When you rebase your local branch onto another, Git replays your commits one by one, and in that process the incoming branch becomes ours and your own commits become theirs. It's counterintuitive, but it follows the same rule: ours is always whichever side is receiving the changes at that moment. Arguably, more intuitive names would be receiving and incoming, describing which branch is accepting the changes and which is providing them. But ours and theirs are deeply embedded in the history of version control tooling, so that's what stuck. Resolving Conflicts Once you understand which version is which, resolution is straightforward. If you want to accept one side entirely without editing, Git provides a shortcut: For anything more nuanced, such as keeping part of each side or blending changes together, you'll need to edit the file manually or reach for a visual tool. Seeing all three versions side by side makes that much easier than reading inline markers. Instead of mentally reconstructing what changed and where, a 3-way diff tool lays it out visually: what your branch has, what the incoming branch has, and what the original looked like before either touched it. 3-Way Diff Merge If you'd like a purpose-built UI for this, Code Input's merge conflict tool shows all three versions at once and lets you apply diffs between any two of them directly from your browser, with no local setup required.]]></content>
        <author>
            <name>Abid Omar</name>
        </author>
        <category label="git"/>
        <category label="merge conflict"/>
        <category label="three-way merge"/>
    </entry>
    <entry>
        <title type="html"><![CDATA[Building an LSP Server with Rust is surprisingly easy and fun]]></title>
        <id>https://codeinput.com//blog/lsp-server</id>
        <link href="https://codeinput.com//blog/lsp-server"/>
        <updated>2026-03-19T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[A hands-on guide to building toy LSP servers in Rust]]></summary>
        <content type="html"><![CDATA[In the past few weeks, I've been on the lookout for a solution to share code between multiple editors and platforms. I'm working on a CodeOwners platform, and part of the offering is various integrations with developers' own editors (like Visual Studio Code, Neovim, Zed, etc.) and potentially LLM agents. Though I knew from the start that each editor would need its own integration, the pattern matching logic for CODEOWNERS rules stays the same across all of them, and it was important that this code produce consistent results whether it ran in Lua or Rust. So the challenge was twofold: how to keep this logic consistent across platforms and languages, and how to keep it in sync when making updates. One idea was to use WebAssembly to encapsulate the logic, ensuring the same code handles pattern matching everywhere. However, there was another challenge: speed. Since the CODEOWNERS CLI reads every file to find ownership data, this part can't be done in WASM and has to live in each editor's own extension, written in whatever language that editor supports. Rust makes this fast, but that's not always an option in other languages. (Theoretically you could use a C binding, but things were getting complicated faster than I liked.) While looking into extension support for editors like Helix that don't have a plugin system, I stumbled onto a completely different approach: LSP. What if I built an LSP server for CODEOWNERS rules? It seemed crazy at first; my intuition was that building LSP servers must be incredibly hard, an impression that came from setting up LSP servers for my Neovim config, which was a painful and buggy process. If installing one was that hard, how difficult could it be to build one? Turns out, not that hard. How does an LSP server work LSP is a protocol. It defines a server that your editor communicates with. 
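Concretely, a single feature is just one JSON-RPC object in and one out. Here's a hand-written illustration of a hover request (the URI and position are made up for the example):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "textDocument/hover",
  "params": {
    "textDocument": { "uri": "file:///project/src/main.rs" },
    "position": { "line": 12, "character": 7 }
  }
}
```

The server answers with the same id and a result payload containing the hover contents, and every other feature (completion, go-to-definition, diagnostics) follows the same request/response shape.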
The simplest mental model for an LSP server is: it's a TCP server that receives JSON objects and answers with JSON objects. The spec standardizes what those JSON objects look like, what method names mean (textDocument/completion, textDocument/hover, etc.), what fields to expect, and what to send back. The editor speaks the protocol, your server listens and responds, and because the protocol is the same everywhere, any editor that implements LSP can talk to any server that implements LSP. Setting up a basic server with Rust For LSP with Rust there are several choices. The most popular one is tower-lsp, but unfortunately that project hasn't been updated in about three years. There isn't much activity around the alternatives either; LSP is kind of a niche thing. For this post, we'll use tower-lsp-server, a community fork of tower-lsp that's actively maintained. First, add the dependency: Next, define a struct that implements the LanguageServer trait. The trait only requires two methods: initialize and shutdown. Everything else is optional and has a default no-op implementation. In its most bare-bones form, your server does nothing at all: Notice there are no dependencies on Tokio or anything async-runtime-specific yet. This code would even compile to WASM, which means you could run it in the browser. Run cargo run and you'll see the full list of methods your service can route: LspService::new returns a tuple: the service itself and a ClientSocket, which is essentially a tx/rx channel you'll use later to push messages from the server to the editor and vice-versa. Do you need a client to communicate with the service? As it turns out, you don't even need a running server. Poking the server without an editor You don't even need an editor (or a server) to test the service. 
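For reference, the bare-bones do-nothing server described above looks roughly like this. This is a sketch based on the tower-lsp API that tower-lsp-server inherits; exact module paths may differ between versions, and some versions require an async_trait attribute on the impl:

```rust
use tower_lsp_server::jsonrpc::Result;
use tower_lsp_server::lsp_types::{InitializeParams, InitializeResult};
use tower_lsp_server::{LanguageServer, LspService};

#[derive(Debug)]
struct Backend;

impl LanguageServer for Backend {
    // The only two required methods; every other LSP method
    // falls back to a default no-op implementation.
    async fn initialize(&self, _: InitializeParams) -> Result<InitializeResult> {
        Ok(InitializeResult::default())
    }

    async fn shutdown(&self) -> Result<()> {
        Ok(())
    }
}

fn main() {
    // LspService::new returns the service plus a ClientSocket
    // (the tx/rx channel mentioned above); both get wired to a
    // transport such as stdio or TCP later.
    let (service, socket) = LspService::new(|_client| Backend);
    let _ = (service, socket);
}
```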
We can manually execute a JSON-RPC initialize request and inspect the response. Add tower-service, serde_json, and futures to your dependencies and try this: Output: The server replied with an empty capabilities object, which makes sense since we declared none. This separation between protocol and server is really useful, and it's one of the things I like most about Rust and its ecosystem; you can already see how unit tests could be written without needing to emulate a server or editor. Connecting to a real editor Now let's wire up a real TCP server and connect it to Neovim. Add Tokio: The Client struct (injected by the framework in the closure) is how you interact with the editor. Here we use it to send a log message right after initialization. The server can push info to the editor at any time through this handle. With the server running, connect from Neovim with a single command: Run :LspInfo and you'll see it's alive: And there we have a real working LSP server. Endless possibilities So what can you actually do with this? Let's start with something silly to get a feel for it, then build up to more nonsense. Custom autocomplete Let's add a completion handler that triggers on % and suggests one item. We advertise the capability in initialize, then implement the completion method: Type % in your editor, and the autocomplete popup appears. label is what shows in the list, detail is the ghost text next to it, and insert_text is what actually gets written into your file when you accept the suggestion. LSP Auto Complete EU Omniscient Chat Control The server can also modify the document in response to changes. Here's something that some people will love: a server that watches for a specific phrase and replaces it on the fly using apply_edit: Now every time you type EU Commission sucks, your text disappears. You could even take it further, like sending an API request to alert the relevant parties. Think about the endless possibilities! 
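The % completion handler from earlier can be sketched in the same style. This is approximate; the field names follow the lsp-types structures that tower-lsp-style frameworks re-export, and the item's label, detail, and insert_text values are made up for illustration:

```rust
// Inside the LanguageServer impl:

// 1. Advertise the completion capability, including the trigger character.
async fn initialize(&self, _: InitializeParams) -> Result<InitializeResult> {
    Ok(InitializeResult {
        capabilities: ServerCapabilities {
            completion_provider: Some(CompletionOptions {
                trigger_characters: Some(vec!["%".to_string()]),
                ..Default::default()
            }),
            ..Default::default()
        },
        ..Default::default()
    })
}

// 2. Answer textDocument/completion with a single hard-coded item.
async fn completion(&self, _: CompletionParams) -> Result<Option<CompletionResponse>> {
    Ok(Some(CompletionResponse::Array(vec![CompletionItem {
        label: "%magic".to_string(),                        // shown in the popup list
        detail: Some("demo suggestion".to_string()),        // ghost text next to the label
        insert_text: Some("something magical".to_string()), // what actually gets inserted
        ..Default::default()
    }])))
}
```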
LSP Auto Complete Building an AI chatbot inside your editor How about something outright stupid (or maybe not?). Lines starting with ## and ending with a newline trigger an API request to an OpenAI-compatible endpoint, and the response comes back on the next line! LSP Chat Bot The previous code was partly generated by LLMs, so it's more of a toy program than something I'd recommend using, especially since requests to LLM endpoints cost real money. Why aren't LSP servers more popular? So why aren't LSP servers used more widely beyond programming languages? Honestly, I'm not sure. I'm still new to this area, so I can't fully assess whether LSP would make sense as an alternative to MCP. You could argue that LSP has certain limitations because it's built around a fixed set of methods, or that it was designed specifically for editors. Then again, maybe that was for the best with what's going on with AI right now. Hey, if you made it this far, make sure to sign up for our newsletter to get future articles delivered to your inbox!]]></content>
        <author>
            <name>Abid Omar</name>
        </author>
        <category label="rust"/>
        <category label="lsp"/>
        <category label="programming"/>
    </entry>
    <entry>
        <title type="html"><![CDATA[Rust-like Error Handling in TypeScript]]></title>
        <id>https://codeinput.com//blog/typescript-result</id>
        <link href="https://codeinput.com//blog/typescript-result"/>
        <updated>2026-03-09T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[An overview of how to replicate Rust Result type ergonomics in TypeScript using neverthrow, covering custom error types, and result chaining with safeTry.]]></summary>
        <content type="html"><![CDATA[TypeScript typing is genuinely hard. It's easy to end up in a codebase where half your types are any and unknown, which defeats the point. But that wasn't what I missed most from Rust. My real pet peeve was losing the ergonomics that come with Rust's Result type. Once you define your own error types in Rust, your functions end up looking like this: There are a lot of things I like about Rust in this function. At a glance, I can tell a1 and a2 are both i32. I can tell that sub_operation_1 and sub_operation_2 both return my custom Result type. And I know this function is extremely unlikely to panic or fail in some unpredictable way. It can fail, but when it does, it bubbles that responsibility up to the caller in a structured, traceable way. You know what failed and where. TypeScript doesn't give you any of that out of the box, so I was left looking for the next best option. Settling with neverthrow neverthrow is the closest TypeScript gets to Rust's Result ergonomics. It's not a 1-to-1 port; the API has its own shape, and you'll need to spend some time getting familiar with it. That's fine, as TypeScript has different constraints anyway. Custom Error/Result Types The first thing I wanted was my own custom error types. Having a centralized error type means I can do things like report errors to external services, show popups, and enforce consistent error handling across the app. By re-exporting Result with my own error type baked in, every new function is forced to handle errors in a way that fits my app's conventions. However, this is a bit more involved than Rust. You also need to re-export and wrap a handful of neverthrow types to make everything work with your custom error. Luckily, it's something that LLMs can help you do. Usage ends up looking something like this: Chaining issue with TypeScript In Rust, you can chain functions that return Result as long as your function returns the same Result type. The ?
operator handles the rest. TypeScript has no equivalent, so you can't just write validateAge(29)?. The next best option is generator functions. In JavaScript, generator functions return an iterable object. The yield keyword lets you short-circuit execution: if an error is returned, it halts immediately and bubbles the error up. neverthrow takes advantage of this with a wrapper function called safeTry. It's not the most elegant ergonomics, but it gets the job done. Now you can safely unwrap the result like this, which is about as close to Rust's Result chaining as TypeScript can get. Effect Can the experience get better? Maybe. Effect is a TypeScript library that goes much further than neverthrow. Typed errors, recovery APIs, tracing, and a lot more. The tradeoff is a steep learning curve, and it reshapes how you structure your entire codebase. For now, neverthrow is enough for me. I've learned to be skeptical when a framework bundles a lot of "free/batteries-included" stuff. There's usually a cost somewhere, and Effect being runtime-evaluated is worth keeping in mind.]]></content>
        <author>
            <name>Abid Omar</name>
        </author>
        <category label="programming"/>
        <category label="type-safety"/>
        <category label="typescript"/>
    </entry>
    <entry>
        <title type="html"><![CDATA[Google Might Think Your Website Is Down]]></title>
        <id>https://codeinput.com//blog/google-seo</id>
        <link href="https://codeinput.com//blog/google-seo"/>
        <updated>2026-02-13T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[How Cross-page AI aggregation can introduce new liability vectors.]]></summary>
        <content type="html"><![CDATA[I recently added JSON-LD structured data to make my information easier for Google to scrape. Google AI answers now take most (all?) of the page real estate, and users tend to treat these answers as authoritative truth. I have a pricing page on my site, so naturally I wanted to test whether my pricing table and FAQ were being picked up correctly. Code Input summarized Pricing details Great! Google was able to read my pricing table. It's not clear whether this information was synthesized from the HTML content, JSON-LD, or a combination of both. But it's correct. Hold on a second, there's a note out there that says: Note: As of early 2026, the website indicated it was offline. Wait. Come again? There's some stuff to unpack here. First, this website runs as a Cloudflare Worker. Cloudflare (despite some setbacks in recent months) is a highly available service with minimal downtime. Second, I'm not aware of Google having any special capability to detect whether websites are up or down. And even if my internal service went down, Google wouldn't be able to detect that since it's behind a login wall. Third, the phrasing says the website indicated rather than people indicated; though in the age of LLM uncertainty, that distinction might not mean much anymore. Fourth, it clearly mentions the timeframe as early 2026. Since the website didn't exist before mid-2025, this actually suggests Google has relatively fresh information; although again, LLMs! Fortunately, Google still includes a link (for now) showing where this information came from. It's a concept that seems pretty foreign to many AI companies, so I'm not sure how long this will remain. Google AI - Sign Up Page Well, let's analyze this popup because it has confusion written all over it. First, both links shown here point to https://codeinput.com, but then just below, it displays the Sign Up page title. 
This is interesting: why would a search for pricing include content specifically from the sign up page? Next, and fortunately, it shows the content that influenced the LLM's earlier decision. You might be wondering what this is about. Since I removed it, let me explain. I used to have a popup showing users the availability status of different services. The popup (a typical React component) was likely either crawled as static HTML (SSG'ed in the compilation process), or the status endpoint failed to return a proper response (if Google client-side rendered the page). Who knows? It's not like Google lets you see how it interprets your website. Google's inference apparently read this block and concluded that my entire website was down. How To Fix This? I have no idea. I wouldn't risk it, so I removed the status popup. Still, we don't know how exactly Google assembles the mix of pages it uses to generate LLM responses. Take the example above: the search was about Code Input's pricing page, but the AI assistant pulled content from both the pricing and signup pages. This is problematic because anything on your web pages might now influence unrelated answers. You could have outdated information on some forgotten page, or contradictory details across different sections. Google's AI might grab any of this and present it as the answer. If you allow user-generated content anywhere on your site (like forum posts or comments), someone could post fake support contact info, and Google might surface that to users searching for how to contact your company. Now scammers have a direct route to your customers.]]></content>
        <author>
            <name>Abid Omar</name>
        </author>
        <category label="seo"/>
        <category label="ai"/>
        <category label="web-development"/>
    </entry>
    <entry>
        <title type="html"><![CDATA[Understanding Merge Conflicts: Top GitHub Repositories Analysis]]></title>
        <id>https://codeinput.com//blog/conflict-dynamics</id>
        <link href="https://codeinput.com//blog/conflict-dynamics"/>
        <updated>2025-03-05T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[An in-depth analysis of Git merge conflict patterns across different programming languages and projects, based on real-world data from GitHub repositories.]]></summary>
        <content type="html"><![CDATA[In this data-driven investigation, we analyzed over 250,000 pull requests from public GitHub repositories to understand the nature and patterns of code conflicts across different programming languages and project types. By examining real-world merge scenarios, we uncovered insights about which languages tend to experience more conflicts, what types of code changes most frequently lead to conflicts, and how project size and team dynamics influence conflict rates. Check back soon for the full article.]]></content>
        <author>
            <name>Abid Omar</name>
        </author>
        <category label="research"/>
        <category label="git"/>
        <category label="conflict"/>
        <category label="merge"/>
    </entry>
    <entry>
        <title type="html"><![CDATA[Code Input Enters Early Alpha Testing]]></title>
        <id>https://codeinput.com//blog/codeinput-alpha</id>
        <link href="https://codeinput.com//blog/codeinput-alpha"/>
        <updated>2025-03-01T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Code Input enters alpha, offering developers an intuitive three-way merge editor to streamline git conflict resolution]]></summary>
        <content type="html"><![CDATA[We're excited to announce that Code Input has officially entered its alpha phase. Code Input is a specialized git conflict merge resolution tool that features an intuitive three-way merge editor designed to make resolving git conflicts less painful and more efficient. After months of development and internal testing, we're now ready to share this tool with the developer community to gather valuable feedback and improve the experience before our full release. We invite developers of all experience levels to try our early alpha and provide feedback on their experience. Your input will help us refine the tool and ensure that it meets the needs of developers everywhere. To get started, click the GitHub button on the top right of this page to authenticate with your GitHub account. You can reach out to us by email at support@codeinput.com with any questions or feedback. We look forward to hearing from you!]]></content>
        <author>
            <name>Abid Omar</name>
        </author>
        <category label="announcement"/>
    </entry>
</feed>