Endigest

© 2026 Endigest. All rights reserved.


Backend Articles

Explore real-world engineering experiences from top tech companies.


Netflix logoNetflix
21 min read
Backend•2026-04-10

Evaluating Netflix Show Synopses with LLM-as-a-Judge

Databricks logoDatabricks
11 min read
Backend•2026-04-10

Database Branching in Postgres: Git-Style Workflows with Databricks Lakebase

This post introduces database branching in Databricks Lakebase Postgres as a copy-on-write primitive that enables Git-like isolated database environments for development workflows.

Product
The Hacker News logoThe Hacker News
11 min read
Backend•2026-04-10

Google Rolls Out DBSC in Chrome 146 to Block Session Theft on Windows

Google has made Device Bound Session Credentials (DBSC) generally available to all Windows users of its Chrome web browser, months after it began testing the security feature in open beta. The public availability is currently limited to Windows users on Chrome 146, with macOS expansion planned in an upcoming Chrome release. "This project represents a significant…"

The Hacker News logoThe Hacker News
41 min read
Backend•2026-04-10

Backdoored Smart Slider 3 Pro Update Distributed via Compromised Nextend Servers

Unknown threat actors have hijacked the update system for the Smart Slider 3 Pro plugin for WordPress and Joomla to push a poisoned version containing a backdoor. The incident impacts Smart Slider 3 Pro version 3.5.1.35 for WordPress, per WordPress security company Patchstack. Smart Slider 3 is a popular WordPress slider plugin with more than 800,000 active installations across its free and Pro…

The Hacker News logoThe Hacker News
31 min read
Backend•2026-04-09

UAT-10362 Targets Taiwanese NGOs with LucidRook Malware in Spear-Phishing Campaigns

A previously undocumented threat cluster dubbed UAT-10362 has been attributed to spear-phishing campaigns targeting Taiwanese non-governmental organizations (NGOs) and suspected universities to deploy a new Lua-based malware called LucidRook. "LucidRook is a sophisticated stager that embeds a Lua interpreter and Rust-compiled libraries within a dynamic-link library (DLL) to download and…

The Hacker News logoThe Hacker News
1 min read
Backend•2026-04-09

The Hidden Security Risks of Shadow AI in Enterprises

As AI tools become more accessible, employees are adopting them without formal approval from IT and security teams. While these tools may boost productivity, automate tasks, or fill gaps in existing workflows, they also operate outside the visibility of security teams, bypassing controls and creating new blind spots in what is known as shadow AI. While similar to the phenomenon of…

The Hacker News logoThe Hacker News
31 min read
Backend•2026-04-08

Shrinking the IAM Attack Surface through Identity Visibility and Intelligence Platforms (IVIP)

The Fragmented State of Modern Enterprise Identity

Enterprise IAM is approaching a breaking point. As organizations scale, identity becomes increasingly fragmented across thousands of applications, decentralized teams, machine identities, and autonomous systems. The result is Identity Dark Matter: identity activity that sits outside the visibility of centralized IAM and…

Databricks logoDatabricks
1 min read
Backend•2026-04-08

How FSIs eliminate silos between clients, operations, and finance

Introduction: In our earlier blog, Enabling Business Users on Databricks, we explored...

Platform
Solutions
The Hacker News logoThe Hacker News
31 min read
Backend•2026-04-08

Iran-Linked Hackers Disrupt U.S. Critical Infrastructure by Targeting Internet-Exposed PLCs

Iran-affiliated cyber actors are targeting internet-facing operational technology (OT) devices across critical infrastructures in the U.S., including programmable logic controllers (PLCs), cybersecurity and intelligence agencies warned Tuesday. "These attacks have led to diminished PLC functionality, manipulation of display data and, in some cases, operational disruption and financial…

GitLab logoGitLab
215 min read
Backend•2026-04-07

Pipeline security lessons from March supply chain incidents

Note: The GitLab product did not use any of the compromised package versions mentioned in this post.

In the span of 12 days, four separate supply chain attacks revealed that continuous integration and continuous delivery (CI/CD) pipelines have become a high-value target for sophisticated threat actors. Between March 19 and March 31, 2026, threat actors compromised:

  • an open-source security scanner (Trivy)
  • an infrastructure-as-code (IaC) security scanner (Checkmarx KICS)
  • an AI model gateway (LiteLLM)
  • a JavaScript HTTP client (axios)

Each attack shared the same surface: the build pipeline. This article shows what happened, why pipelines can be uniquely vulnerable, and how centralized policy enforcement with GitLab — using policies defined below — can block, detect, and contain these classes of attack before they reach production.

Trusted by millions, compromised in minutes

Here is the timeline of the supply chain attacks:

March 19: Trivy security scanner becomes an attack vector

Trivy is one of the most widely used open-source vulnerability scanners in the world. It is the tool teams run inside their pipelines to find vulnerabilities. On March 19, a threat actor group known as TeamPCP used compromised credentials to force-push malicious code into 76 of 77 version tags of the aquasecurity/trivy-action GitHub Action and all 7 tags of aquasecurity/setup-trivy. Simultaneously, they published a trojanized Trivy binary (v0.69.4) to official distribution channels.

The payload was credential-stealing malware that harvested environment variables, cloud tokens, SSH keys, and CI/CD secrets from every pipeline that ran a Trivy scan. The incident was assigned CVE-2026-33634 with a CVSS score of 9.4. The Cybersecurity and Infrastructure Security Agency (CISA) added it to the Known Exploited Vulnerabilities catalog within days.
March 23: Checkmarx KICS falls next

Using stolen credentials, TeamPCP pivoted to Checkmarx’s open-source KICS (Keeping Infrastructure as Code Secure) project. They compromised the ast-github-action and kics-github-action GitHub Actions, injecting the same credential-stealing malware. Between 12:58 and 16:50 UTC on March 23, any CI/CD pipeline referencing these actions was silently exfiltrating sensitive data, such as API keys, database passwords, cloud access tokens, SSH keys, and service account credentials.

March 24: LiteLLM compromised via stolen Trivy credentials

LiteLLM, an LLM API proxy with 95 million monthly downloads, was the next target. TeamPCP published backdoored versions (1.82.7 and 1.82.8) to PyPI using credentials harvested from LiteLLM’s own CI/CD pipeline, which used Trivy for scanning. The malware targeting version 1.82.7 used a base64-encoded payload injected directly into litellm/proxy/proxy_server.py that executed at import time. The version targeting 1.82.8 used a .pth file, a Python mechanism that executes automatically during interpreter startup. Simply installing LiteLLM was enough to trigger the payload. Attackers encrypted the stolen data (SSH keys, cloud tokens, .env files, cryptocurrency wallets) and exfiltrated it to models.litellm.cloud, a lookalike domain.

March 31: Source code for AI coding assistant leaked via simple packaging mistake

While the TeamPCP campaign was still unfolding, a software company shipped an npm package containing a 59.8 MB source map file — one that referenced its AI coding assistant's complete, unminified TypeScript source code, hosted in the company's own Cloudflare R2 bucket. The leak exposed 1,900 TypeScript files, 512,000+ lines of code, 44 hidden feature flags, unreleased model codenames, and the full system prompt for anyone who knew where to look.
As engineer Gabriel Anhaia explained, “A single misconfigured .npmignore or files field in package.json can expose everything.”

March 31: axios and another trojan in the supply chain

That same day, a sophisticated campaign targeted the axios npm package, a JavaScript HTTP client with over 100 million weekly downloads. A compromised maintainer account published backdoored versions (1.14.1 and 0.30.4). Each injected a malicious dependency (plain-crypto-js@4.2.1) that deployed a Remote Access Trojan capable of running on macOS, Windows, and Linux. Both release branches were hit within 39 minutes, with the malware designed to self-destruct after execution.

The patterns behind these attacks

Across these five incidents, three distinct attack patterns emerge, and all of them exploit the implicit trust that CI/CD pipelines place in their inputs.

Pattern 1: Poisoned tools and actions

The TeamPCP campaign exploited a fundamental assumption: that the security tools running inside your pipeline are themselves trustworthy. When a GitHub Action tag or a PyPI package version resolves to malicious code, the pipeline executes it with full access to environment secrets, cloud credentials, and deployment tokens. There is no verification step because the pipeline trusts the tag.

A recommended pipeline-level control: Pin tools and actions to immutable references (commit SHAs or image digests) rather than mutable version tags. Where pinning is not practical, verify the integrity of tools and dependencies against known-good checksums or signatures. Block execution if verification fails.

Pattern 2: Packaging misconfigurations that leak IP

A misconfigured build pipeline shipped debugging artifacts straight into the production package. A misconfigured .npmignore or files field in package.json is all it takes. A pre-publish validation step should catch this every time.
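The Pattern 1 control (verifying tools against known-good checksums before they run) can be sketched in a few lines of Python. This is a minimal illustration, not the policy's actual implementation; the artifact bytes and the pinned digest below are placeholders:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of a downloaded artifact, as lowercase hex."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    """Fail closed: run the tool only if its digest matches the pinned value."""
    return sha256_hex(data) == pinned_digest.lower()

# The pinned digest should come from a trusted source (for example, a signed
# release manifest committed to the repo), never from the same channel that
# served the binary. Placeholder bytes stand in for the real download:
binary = b"scanner binary bytes"
pinned = sha256_hex(b"scanner binary bytes")  # hypothetical known-good digest

print(verify_artifact(binary, pinned))             # matching digest -> True
print(verify_artifact(b"tampered bytes", pinned))  # tampered download -> False
```

The key design point is failing closed: if the digest does not match, the pipeline job exits nonzero and nothing downstream executes.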
A recommended pipeline-level control: Before any package is published, run automated checks that validate the package contents against an allowlist, flag unexpected files (source maps, internal configs, .env files), and block the publish step if the checks fail.

Pattern 3: Vulnerabilities in transitive dependencies

The axios attack targeted not just direct users of axios, but anyone whose dependency tree resolved to the compromised version. A single poisoned dependency in a lockfile can thus propagate through an entire organization’s build infrastructure.

A recommended pipeline-level control: Compare dependency checksums against known-good lockfile state. Detect unexpected new dependencies or version changes. Block builds that introduce unverified packages.

How GitLab Pipeline Execution Policies address each attack pattern

GitLab Pipeline Execution Policies (PEPs) enable security and platform teams to inject mandatory CI/CD jobs into every pipeline across an organization, regardless of what a developer defines in their .gitlab-ci.yml. Jobs defined in PEPs cannot be skipped, even with [skip ci] or [no_pipeline] directives. Jobs can be executed in reserved stages (.pipeline-policy-pre and .pipeline-policy-post) that bookend the developer’s pipeline.

We have published ready-to-use pipeline execution policies for all three patterns as an open-source project: Supply Chain Policies. These policies are independently deployable, and each one ships with violation samples that you can use to test them. Here is how each one works.

Use case 1: Prevent accidental exposure in package publishing

Problem: A source map file ended up in the npm package of an AI coding tool after the build pipeline skipped publish-time validation.

PEP approach: We built an open-source Pipeline Execution Policy for exactly this class of error: Artifact Hygiene.
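The Pattern 3 control described above (comparing what actually got installed against the committed lockfile) has a simple shape. The sketch below is illustrative only, with invented package data; real lockfiles also carry integrity hashes, which this toy comparison omits:

```python
def diff_dependencies(lockfile: dict, installed: dict) -> list:
    """Compare resolved dependencies against the committed lockfile.

    Both arguments map package name -> version. Returns human-readable
    violations for anything installed that the lockfile did not declare
    or pinned at a different version."""
    violations = []
    for name, version in installed.items():
        if name not in lockfile:
            violations.append(f"UNDECLARED: {name}@{version} not in lockfile")
        elif lockfile[name] != version:
            violations.append(
                f"MISMATCH: {name}@{version} (lockfile pins {lockfile[name]})")
    return violations

# Invented example: a compromised postinstall step pulled in an extra
# package and bumped a pinned one. Versions here are illustrative.
lockfile = {"axios": "1.13.0", "left-pad": "1.3.0"}
installed = {"axios": "1.14.1", "left-pad": "1.3.0", "plain-crypto-js": "4.2.1"}
for violation in diff_dependencies(lockfile, installed):
    print(violation)
```

Either violation type blocks the build; a developer who legitimately needs a new dependency commits an updated lockfile instead.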
The policy injects .pipeline-policy-pre jobs that auto-detect the artifact type (npm package, Docker image, or Helm chart) and inspect the contents before any publish step runs. For npm packages, it performs three checks:

  • File pattern blocklist. Scans npm pack output for source maps (.map), test directories, build configs, IDE settings, and src/ directories.
  • Package size gate. Blocks packages exceeding 50 MB, like the 59.8 MB package that leaked the AI tool.
  • sourceMappingURL scan. Detects external URLs (the R2 bucket pattern that exposed a major AI company’s source), inline data: URIs, and local file references embedded in JavaScript bundles.

When violations are found, the pipeline fails with a clear report in the failed CI job logs:

=============================================
FAILED: 3 violation(s) found
=============================================
BLOCKED: dist/index.js.map (matched: \.map$)
BLOCKED: dist/index.js contains external sourceMappingURL
BLOCKED: dist/utils.js contains inline sourceMappingURL

This check is enforced by a Pipeline Execution Policy.
If this is a false positive, contact the security team to update the policy project or exclude this project.

The policy has no user-configurable CI variables. Developers cannot disable or bypass it. Exceptions are managed by the security team at the policy level, ensuring a deliberate process and a clean audit trail.

The repository includes a test project with intentional violations (examples/leaky-npm-package/) so you can see the policy in action before deploying it to your organization. The README includes a complete quick-start guide for setup and deployment.

What this catches: Any one of these controls would likely have prevented the AI company's source code leak:

  • The source map file triggers the file pattern blocklist.
  • Its 59.8 MB size triggers the size gate.
  • The sourceMappingURL pointing to an external R2 bucket triggers the URL scan.
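To make the three npm checks concrete, here is a rough Python sketch of the same logic. The blocklist patterns, the 50 MB limit, and the sample file contents mirror the prose above, but this is a simplified model, not the Artifact Hygiene policy's actual code:

```python
import fnmatch
import re

SIZE_LIMIT = 50 * 1024 * 1024  # the 50 MB package size gate
BLOCKED_PATTERNS = ["*.map", "src/*", "test/*", ".env", ".idea/*"]
# Captures the target of a sourceMappingURL comment: external URL,
# inline data: URI, or local file reference.
SOURCEMAP_RE = re.compile(r"sourceMappingURL=(https?://\S+|data:\S+|\S+)")

def scan_package(files: dict, total_size: int) -> list:
    """files maps path -> text content of the packed npm tarball entries."""
    violations = []
    if total_size > SIZE_LIMIT:
        violations.append(f"BLOCKED: package size {total_size} exceeds gate")
    for path, content in files.items():
        if any(fnmatch.fnmatch(path, pat) for pat in BLOCKED_PATTERNS):
            violations.append(f"BLOCKED: {path} (matched blocklist)")
        for match in SOURCEMAP_RE.finditer(content):
            target = match.group(1)
            kind = ("external" if target.startswith("http")
                    else "inline" if target.startswith("data:") else "local")
            violations.append(f"BLOCKED: {path} contains {kind} sourceMappingURL")
    return violations

# Hypothetical package contents resembling the leak described above:
files = {
    "dist/index.js": "//# sourceMappingURL=https://bucket.example.dev/app.js.map",
    "dist/index.js.map": "{}",
}
for violation in scan_package(files, total_size=1024):
    print(violation)
```

A single leaky artifact trips two independent checks here, which is the point of layering them.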
Use case 2: Detect dependency tampering and lockfile manipulation

Problem: The axios attack introduced a malicious transitive dependency (plain-crypto-js) that executed a RAT on install. Anyone who ran npm install during the compromise window pulled in the trojan.

PEP approach: The Dependency Integrity policy injects .pipeline-policy-pre jobs that auto-detect the package ecosystem (npm or Python) and perform three checks.

For npm projects (triggered by package-lock.json, yarn.lock, or pnpm-lock.yaml):

  • Lockfile integrity. Runs npm ci --ignore-scripts, which fails if node_modules would differ from what the lockfile specifies. This catches cases where package.json was updated but the lockfile was not regenerated, and also verifies SRI integrity hashes.
  • Blocked package scan. Cross-references the lockfile’s full dependency tree against blocked-packages.yml, a GitLab-maintained list of known-compromised package versions. The shipped blocklist includes axios@1.14.1, axios@0.30.4, and plain-crypto-js@4.2.1.
  • Undeclared dependency detection. After install, compares the contents of node_modules against the lockfile. Any package present on disk but absent from the lockfile indicates tampering (e.g., a compromised postinstall script that fetches additional packages).

For Python projects (triggered by requirements.txt, Pipfile.lock, poetry.lock, or uv.lock):

  • Lockfile integrity. Installs in an isolated virtual environment and verifies that the install succeeds from the lockfile.
  • Blocked package scan. Same blocklist approach. The shipped list includes litellm==1.82.7 and litellm==1.82.8.
  • .pth file detection. Scans site-packages for .pth files containing executable code patterns (import os, exec(, eval(, __import__, subprocess, socket). This is the exact mechanism the LiteLLM backdoor used.
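The .pth detection idea can be sketched briefly. The substring patterns come from the list above; a production check would need to be more careful about false positives (some legitimate .pth files, such as those written by setuptools, do begin with an import line), and the malicious payload text below is invented:

```python
# Executable-code patterns from the check described above. Legitimate .pth
# files normally contain only directory paths to append to sys.path, so
# these substrings in a line are suspicious.
SUSPICIOUS = ("import os", "exec(", "eval(", "__import__", "subprocess", "socket")

def scan_pth(name: str, text: str) -> list:
    """Flag lines of a .pth file that carry executable code patterns."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(pattern in line for pattern in SUSPICIOUS):
            hits.append(f"BLOCKED: {name}:{lineno} matches executable pattern")
    return hits

# A benign .pth only extends sys.path; the malicious one resembles the
# interpreter-startup payload mechanism described above (payload invented).
benign = "/usr/lib/python3/dist-packages\n/opt/vendor/lib"
malicious = "import os; exec(__import__('base64').b64decode('aW1wb3J0IHRoaXM='))"

print(scan_pth("benign.pth", benign))   # []
print(scan_pth("evil.pth", malicious))  # one BLOCKED line
```

Because .pth files execute at interpreter startup, catching them before any other pipeline job runs is what makes the .pipeline-policy-pre stage the right place for this check.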
When a violation is found:

=============================================
FAILED: 1 violation(s) found
=============================================
BLOCKED: axios@1.14.1 is a known-compromised package

This check is enforced by a Pipeline Execution Policy.

The policy runs in strict mode: any dependency not present in the committed lockfile blocks the pipeline. If a developer needs to add a dependency, they commit the updated lockfile. The policy verifies that the installed version matches the committed version. If something appears that was not committed (e.g., a transitive dependency injected via a compromised upstream package), the pipeline blocks.

What this catches: The introduction of plain-crypto-js as a new, previously unseen dependency would be flagged by the undeclared dependency check. The axios@1.14.1 version would be caught by the blocked package scan. The LiteLLM .pth file would be caught by the .pth detection check. Each attack has at least one, and often two, independent detection signals.

Use case 3: Detect and block compromised tools before execution

Problem: TeamPCP replaced trusted Trivy and Checkmarx GitHub Action tags with malicious versions. Any pipeline referencing those tags executed credential-stealing malware.

PEP approach: The Tool Integrity policy injects a .pipeline-policy-pre job that queries the GitLab CI Lint API (or falls back to evaluating the .gitlab-ci.yml), extracts the container image references, and compares them against an approved images allowlist maintained by the security team.

The allowlist (approved-images.yml) supports three controls per image:

  • Approved repositories: Only images from repositories on the list are permitted. An unknown repository blocks the pipeline.
  • Allowed tags: Only specific tags are permitted within an approved repository. This prevents drift to untested versions.
  • Blocked tags: Known-compromised versions can be explicitly blocked even if the repository is approved.
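A sketch of how such an allowlist might evaluate an image reference, modeled on the approved-images.yml structure described above. The allowlist entries (including the "allowed" 0.70.x tags) are illustrative assumptions, and the parsing is deliberately simplified (it ignores registry hosts with ports and digest references):

```python
# Illustrative allowlist; the real policy loads this from approved-images.yml.
ALLOWLIST = {
    "aquasec/trivy": {
        "allowed_tags": {"0.70.0", "0.70.1"},   # hypothetical approved tags
        "blocked_tags": {"0.69.4", "0.69.5", "0.69.6"},
    },
}

def check_image(ref: str, job: str):
    """Return None if the image reference is approved, else a violation string."""
    repo, _, tag = ref.partition(":")  # simplified: no registry ports/digests
    tag = tag or "latest"
    entry = ALLOWLIST.get(repo)
    if entry is None:
        return f"BLOCKED: {ref} (job: {job}) - repository not approved"
    if tag in entry["blocked_tags"]:
        return f"BLOCKED: {ref} (job: {job}) - tag '{tag}' is known-compromised"
    if tag not in entry["allowed_tags"]:
        return f"BLOCKED: {ref} (job: {job}) - tag '{tag}' is not approved"
    return None

print(check_image("aquasec/trivy:0.69.4", "trivy-scan"))  # known-compromised
print(check_image("aquasec/trivy:0.70.0", "trivy-scan"))  # None: approved
print(check_image("evil/miner:latest", "build"))          # repo not approved
```

The three branches map one-to-one onto the three controls: unknown repository, blocked tag, and unapproved tag, in that order of evaluation.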
The shipped allowlist blocks aquasec/trivy:0.69.4 through 0.69.6, the exact versions TeamPCP trojanized. When a violation is found, the pipeline fails before any other job runs:

=============================================
FAILED: 1 violation(s) found
=============================================
BLOCKED: aquasec/trivy:0.69.4 (job: trivy-scan) - tag '0.69.4' is known-compromised

This check is enforced by a Pipeline Execution Policy.

The allowlist is maintained via MRs against the policy project. To add a new approved image, the security team opens an MR. To respond to a new compromise, they add a blocked tag. No code changes required, just YAML.

What this catches: When images with unapproved tags are detected, the policy compares the image repository names and tags to an allowlist. A failed match blocks the pipeline before any scanner executes, preventing credential exfiltration.

Note: By extending the sample above, PEPs can be used to force pinning to digests rather than tags; digests are immune to force pushes. This sample demonstrates a more basic tag-based enforcement pattern.

Beyond PEPs: GitLab’s supply chain defenses

Pipeline Execution Policies are the enforcement layer, but they work best as part of a broader defense-in-depth strategy. GitLab provides several capabilities that complement PEPs for supply chain protection.

Secret detection

GitLab secret detection prevents credentials from landing in the repository in the first place, significantly reducing what a compromised pipeline tool can harvest. In the context of the March 2026 attacks: Credentials stored in repositories are both easier for attackers to discover and slower to rotate. The Trivy incident showed that even the rotation process can be exploited: Aqua Security's rotation was not atomic, and the attacker captured newly issued tokens before the old ones were fully revoked.
GitLab Secret Detection includes automatic revocation for leaked GitLab tokens and a partner API that notifies third-party providers to revoke their credentials, accelerating response when a breach does occur. Secret detection combined with proper secret management (short-lived tokens, vault-backed credentials, minimal pipeline secret exposure) limits what an attacker can reach even when a trusted tool turns hostile.

Dependency scanning via software composition analysis (SCA)

GitLab dependency scanning identifies known vulnerabilities in project dependencies by analyzing lockfiles and manifests. In the context of the March 2026 attacks:

  • For LiteLLM, the compromised versions (1.82.7, 1.82.8) are tracked in GitLab's advisory database, flagging affected Python projects automatically.
  • For axios, dependency scanning identifies the compromised versions (1.14.1, 0.30.4) across every project in the organization, giving security teams a single view for assessing blast radius and prioritizing credential rotation.
  • Similarly, all npm packages compromised by TeamPCP's CanisterWorm propagation are also flagged if used.

GitLab Container Scanning detects vulnerable container images used in your deployments. For the Trivy compromise, Container Scanning flags the trojanized Trivy Docker images (0.69.4 through 0.69.6) when they appear in your container registry or deployment manifests.

Merge request approval policies

Merge request approval policies can require security team approval before changes to dependency lockfiles or CI/CD configurations are merged. This ensures a human checkpoint for the types of changes that supply chain attacks typically introduce.

Coming soon: Dependency Firewall, Artifact Registry, and SLSA Level 3 Attestation & Verification

Upcoming GitLab supply chain security capabilities harden policy enforcement at two critical control points: the registry and the pipeline.
The Dependency Firewall and Artifact Registry will block non-conforming packages, while SLSA Level 3 attestation will provide cryptographic proof that artifacts were produced by approved pipelines and remain unmodified. Together, they will give security teams verifiable control over what enters and exits the software supply chain.

What this means for your organization

Amidst rising AI-assisted threats, attacks on CI/CD pipelines are becoming commonplace. The TeamPCP campaign shows how a single compromised credential can cascade across an ecosystem of trusted tools. If your organization used any of the affected components, operate with the assumption that all of your pipeline secrets were exposed: rotate them immediately and audit systems for persisted backdoors. Either way, regularly rotating credentials and using short-lived tokens limits the blast radius of any future compromise.

Here is what we recommend:

  • Pin dependencies to checksums, when possible. Mutable version tags (like the ones TeamPCP hijacked) are not a security boundary. Use SHA-pinned references for all CI/CD components or actions and container images.
  • Run pre-execution integrity checks. Use Pipeline Execution Policies to verify tool and dependency integrity before any pipeline job runs. This is the .pipeline-policy-pre stage.
  • Audit what you publish. Every package publish step should include automated validation of the artifact contents. Source maps, environment files, and internal configuration should never leave your build environment. The Supply Chain Policy project provides a ready-to-deploy starting point for npm, Docker, and Helm artifacts.
  • Detect dependency drift. Compare dependency resolutions against committed lockfiles on every pipeline run. Monitor for unexpected new dependencies.
  • Centralize policy management. Do not rely on developers remembering to include security checks. Enforce them at the group or instance level through policies that developers cannot remove or skip.
  • Assume your security tools are targets. If your vulnerability scanner, static application security testing (SAST) tool, or AI gateway can be compromised, it will be. Limit each tool to its least necessary privileges and verify that it can't reach anything else.

Protect your pipelines with GitLab

Over two weeks, attackers compromised production pipelines at organizations running some of the most widely adopted tools in the software development ecosystem. The lesson is clear: Build pipelines need the same degree of centralized, policy-driven protection that we apply to networks and cloud infrastructure.

GitLab Pipeline Execution Policies provide that enforcement layer. They ensure that security checks run on every pipeline, in every project, regardless of individual project configurations. Combined with dependency scanning, secret detection, and merge request approval policies, they can block, detect, and contain the class of attacks we saw in March 2026.

The Supply Chain Policies project provides a working Pipeline Execution Policy that catches the exact class of error behind the major AI company’s leak, with coverage for npm packages, Docker images, and Helm charts. Clone it, deploy it to your group, and ensure that all of your pipelines are ready for the supply chain attacks to come.

To get started with centralized pipeline policies, sign up for a free trial of GitLab Ultimate.

This blog post contains "forward-looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934. Although we believe that the expectations reflected in these statements are reasonable, they are subject to known and unknown risks, uncertainties, assumptions and other factors that may cause actual results or outcomes to differ materially. Further information on these risks and other factors is included under the caption "Risk Factors" in our filings with the SEC.
We do not undertake any obligation to update or revise these statements after the date of this blog post, except as required by law.

Spring logoSpring
11 min read
Backend•2026-04-07

This Week in Spring - April 7th, 2026

This is a weekly roundup of Spring ecosystem news and community highlights for April 7th, 2026.

Netflix logoNetflix
31 min read
Backend•2026-04-02

Smarter Live Streaming at Scale: Rolling Out VBR for All Netflix Live Events

Netflix explains their January 26, 2026 transition from CBR to VBR encoding for all Live events, detailing the engineering challenges and solutions involved.

encoding
content-delivery-platform
adaptive-bitrate
live-streaming

Trending This Week

#1
GitHub logoGitHub

Agent-driven development in Copilot Applied Science

13 views•2026-03-31
#2
AWS logoAWS

Announcing managed daemon support for Amazon ECS Managed Instances

12 views•2026-04-01
#3
Google Cloud logoGoogle Cloud

Spanner's multi-model advantage for the era of agentic AI

10 views•2026-03-31
#4
AWS logoAWS

AWS Weekly Roundup: AWS DevOps Agent & Security Agent GA, Product Lifecycle updates, and more (April 6, 2026)

9 views•2026-04-06
#5
The Hacker News logoThe Hacker News

TrueConf Zero-Day Exploited in Attacks on Southeast Asian Government Networks

9 views•2026-03-31
#6
Google Cloud logoGoogle Cloud

How AI-powered tools are driving the next wave of sustainable infrastructure and reporting

9 views•2026-03-31