The only SAFE-DSP partner with a fully self-developed foundation IP solution and platform-based design environment to deliver quick integration and optimal PPA.
Amid consecutive onshoring policies and rising tariffs from Western governments, international conglomerates and companies have looked into reorganizing their supply chains. This has accelerated in recent months with the tariff rollout of President Trump’s administration, while the Biden administration’s push to ban Chinese software from the automotive market is still ongoing. In an environment marked by heightened uncertainty and volatility, what challenges and opportunities does the current international environment offer to Korean SMEs?
In every situation, there are both advantages and disadvantages.
When examining the current dynamics between the United States and China, we see some important implications. From our perspective, China has been rapidly catching up with the U.S. in many fields. Rather than attempting to halt this momentum outright, it appears the U.S. is trying to buy time by imposing restrictions, not only on software or finished products but also on the infrastructure and core technologies essential for manufacturing.
From our standpoint, tariffs themselves are not a primary concern. We believe their impact is likely to be greater on complete systems and downstream industries than on companies like ours that are more upstream in the value chain.
One of the most significant opportunities for us has been in the field of Bitcoin and Litecoin mining. Historically, the coin mining industry, particularly in China, was highly vertically integrated. Companies managed everything from chip design and semiconductor manufacturing to hardware production, farm construction, and service operations within a single ecosystem. However, even before President Trump took office, sentiment in the U.S. began to shift against the use of Chinese-made equipment and systems capable of handling sensitive or valuable assets. That shift created a significant opening for companies like ours.
It is important to first understand the role a design service company plays in bridging the foundry and the end customer. In the standard model, the foundry develops its own semiconductor manufacturing process and provides customers with a design kit, which includes a foundation library: a set of standardized logic cells that sit one abstraction level above the transistor level.
However, the Bitcoin mining sector functions quite differently. In this domain, foundries such as TSMC or Samsung Foundry typically do not provide a full design kit. Instead, they offer only a Process Design Kit (PDK), which contains basic process information but excludes the foundation library. Overseas customers are accustomed to this structure. They take the PDK, develop their own foundation libraries, construct their own design kits, and proceed independently. With years of experience, they possess the necessary infrastructure to handle this model.
On the other hand, many U.S.-based hedge funds and investors are increasingly seeking to support local fabless semiconductor startups. These companies usually have strong design capabilities but lack the technical know-how or resources to create their own foundation libraries from a PDK. That is precisely where we come in, with a strength we call Capella, which I would like to explain in more detail. Capella is a broad initiative, and one of its key strengths lies in our ability to develop foundation libraries, starting from the transistor level.
This unique capability has allowed us to enter both the Bitcoin and Litecoin markets. For Litecoin, we are currently in discussions with overseas customers. For Bitcoin, we are already working with partners in the United States and Israel. As these markets continue to evolve, we are proud to be the only company currently capable of providing full support for Samsung Foundry customers in this niche.
At AD Technology, we have an internal department called the Infrastructure Team that specializes in supporting semiconductor development, down to the process and transistor levels. They bridge the gap between the foundry’s PDK and the customer’s final design. This capability allows us to collaborate effectively with clients in the coin mining chip sector.
Let me give you an example of the problems we solve. For Litecoin and Dogecoin, both foundation cell libraries and SRAM (static random-access memory) are required, similar to conventional SoC (System-on-Chip) designs. However, Bitcoin chips are structurally different. They include an internal SHA (Secure Hash Algorithm) engine, which eliminates the need for SRAM, depending on how the architecture is designed.
At our company, the Infrastructure Team is divided into two groups: one that develops foundation libraries and another that focuses exclusively on SRAM. This structure allows us to meet the varying needs of different cryptocurrencies and their specific chip architectures.
SRAM is a type of static memory, often used as cache memory. In a typical logic SoC, we use different transistor-level configurations of SRAM, such as 2T, 3T, or 5T, depending on the application. In some cases, a memory compiler is used to generate combinations of SRAM blocks tailored to a specific design. There is no fixed one-size-fits-all solution.
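To make the memory-compiler idea concrete, here is a minimal sketch in Python. The macro sizes and the deliberately naive greedy tiling policy are invented for illustration; this is not AD Technology's actual compiler, only a toy model of the "generate combinations of SRAM blocks tailored to a design" idea described above.

```python
# Toy model of a memory compiler: given a requested capacity, tile it
# from a fixed catalog of pre-characterized SRAM macros.
# Macro sizes and the greedy policy are illustrative assumptions.

SRAM_MACROS_KBIT = [256, 64, 16]  # available block sizes, largest first

def compile_sram(required_kbit: int) -> list[int]:
    """Return a list of macro sizes (in kbit) covering the request."""
    blocks = []
    remaining = required_kbit
    for size in SRAM_MACROS_KBIT:
        while remaining >= size:
            blocks.append(size)
            remaining -= size
    if remaining > 0:                  # round up with the smallest macro
        blocks.append(SRAM_MACROS_KBIT[-1])
    return blocks

if __name__ == "__main__":
    plan = compile_sram(600)
    print(plan)       # [256, 256, 64, 16, 16]
    print(sum(plan))  # 608 kbit, i.e. the request rounded up
```

A real compiler would, of course, also trade off aspect ratio, ports, timing, and leakage per macro; the point here is only that there is no one-size-fits-all SRAM block, so generation is per-design.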
Looking at other industries that are actively growing, such as AI chips or the automotive sector, where demand for SoCs and complex systems is rising: does the industry need such foundation library services from design service providers? And is Capella ready to support broader types of macros?
Capella is generally referred to as a “design flow,” which is a systemized process that allows design service companies like ours to transform RTL (Register Transfer Level) code or a customer’s concept into actual silicon using a variety of EDA (Electronic Design Automation) tools. This is the core mission of a design service provider.
In this context, one must understand the importance of PPA: power, performance, and area. To put it simply, the Capella design flow is capable of being applied across a wide variety of applications. Whether it’s AI or automotive, today’s industries are just as concerned with power efficiency as they are with raw computing power, particularly in AI. For example, there was a case in Ireland where a single data center consumed as much electricity as an entire city. It reached a point where the government had to consider building a dedicated power plant just to meet the demand.
From that standpoint, Capella was specifically developed to reduce unnecessary energy consumption while enhancing overall performance. It enables ultra-low-power, high-performance chip designs. For this reason, Capella is applicable not only to AI but also to automotive, networking, communications, and other general-purpose industries. Capella employs a wide range of EDA tools, and we also maintain an internal team that specializes in full-custom layout design. This team can create entirely new cells through full-custom methods, recharacterize existing cells for specific applications, or combine multiple cells to create new mega function cells.
Our design process is built around close collaboration with the customer. We work together to analyze their design and identify potential optimizations in terms of power, performance, and area, the PPA. This collaborative process lies at the heart of what Capella offers.
Among the applications we can cover, Bitcoin is a uniquely specialized case. One of the advanced techniques we use is NTV (Near Threshold Voltage). Normally, transistors switch on and off at a certain voltage, often close to 1 volt. But in NTV designs, we lower that voltage threshold significantly, sometimes to less than 0.2 volts. At these voltage levels, distinguishing between actual signal and electrical noise becomes highly challenging. It requires extremely advanced design methodologies.
So, within the Capella framework, building cell libraries for Bitcoin represents one of the most technically demanding areas we work in.
How low are you talking about?
In the case of the cell libraries we have developed for the Bitcoin market, where companies like Bitmain, MicroBT, and Bitdeer are major players, the threshold voltage is now typically around 0.18 or 0.17 volts. That is already extremely low.
This is what we mean by "Near Threshold Voltage." In fact, some companies are now exploring thresholds as low as 0.1 volts or even zero threshold. These ultra-low voltages are necessary to achieve the level of energy efficiency and performance that gives them a competitive advantage in such specialized applications.
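The appeal of pushing toward near-threshold operation follows from the standard first-order CMOS power model, in which dynamic switching power scales with the square of the supply voltage. The sketch below uses illustrative numbers only and ignores leakage and the frequency loss that NTV designs must also manage, but it shows why dropping from roughly 0.9 V toward 0.18 V is so attractive:

```python
# First-order illustration of near-threshold voltage (NTV) savings:
# dynamic power P_dyn ~ alpha * C * V^2 * f, so with activity,
# capacitance, and frequency held fixed, power scales as V squared.
# Voltages are illustrative, not measured silicon data.

def dynamic_power(v_supply: float, v_ref: float = 0.9) -> float:
    """Dynamic power relative to a nominal 0.9 V supply."""
    return (v_supply / v_ref) ** 2

nominal = dynamic_power(0.9)    # 1.0  (baseline)
ntv = dynamic_power(0.18)       # (0.18 / 0.9)**2 = 0.04
print(f"NTV power vs nominal: {ntv:.2f}x")  # 0.04x, a 25x reduction
```

The quadratic term is also why the remaining signal margin becomes so thin: at 0.18 V the swing available to distinguish signal from noise has shrunk along with the power.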
In the automotive industry, architectures are evolving around increasingly complex chips and computing systems. Traditional Tier 1 suppliers are starting to realize the importance of in-house chip design, as specialized chip designers, moving downstream, are beginning to take a larger part of the pie. This shift is transforming the entire landscape. How do you view the current market transition, and how do you approach collaboration with both traditional and emerging players?
This trend reflects a classic transformation within the supply chain. Traditionally, there are the OEMs, followed by Tier 1 suppliers such as Continental and Bosch. Then there are component suppliers, chip providers, and material suppliers, especially those that support electronic components. For instance, if Qualcomm or NVIDIA develops a new chip, they typically collaborate first with Tier 1 suppliers, who possess detailed information about OEM requirements and excel in system integration to meet those specifications.
In that structure, foundries and design service companies like ours operate one layer further upstream. Our direct customers are typically Tier 1s or component suppliers serving the automotive industry. While OEMs certainly play a key role, the market is evolving quickly, especially with the rise of EV makers like Tesla and BYD.
In the EV space, traditional Tier 1 companies often lack deep expertise in semiconductors. Their core strengths have historically been system integration and maintaining relationships with OEMs, not chip design. Over the past few years, chipmakers like NVIDIA and Qualcomm have started working directly with OEMs like BMW, effectively bypassing Tier 1s. This puts pressure on Tier 1s, whose expertise lies more in software integration and interface development than in silicon design.
Meanwhile, chipmakers now offer not just silicon, but fully integrated systems. Their business model is changing, and each company is pursuing a different strategy. Some silicon providers lead the market because they possess core IP and strong internal design capabilities, for example in TCUs (Telematics Control Units). Others, like Infineon and NXP, supply simpler processors for applications like DCUs (Domain Control Units), which are mainly used for basic control functions such as lighting systems.
This has effectively split the silicon provider market into two categories. While Tier 1 companies are still relevant today, their role is changing, and over time, this structure will continue to shift.
Last year, I visited Nio, which is well known for its E/E architecture for EVs. Nio already uses chips by leading chipmakers, but they’ve also developed their own R&D platform. Performance-wise, their internally developed chips outperform the leading chipmaker’s by nearly four times. This suggests a trend: automakers are now building their own silicon and sensors. Larger fabless companies may not need a design service provider, as they have the scale and R&D resources to work directly with foundries. But for companies without full in-house capabilities, partnering with design service firms like ours becomes essential. These companies have clear purposes and strong system-level knowledge but need external support to execute on silicon.
Today’s EVs require more NPU (Neural Processing Unit) functions to support AI, camera processing, and inference tasks. In fact, if you look at modern SoCs, 30 to 40 percent of the silicon is often dedicated to NPU IP. When a company develops its own NPU, it must also create a supporting software ecosystem to interface with OEM platforms. Even when the hardware is the same, each OEM demands a different type of interface and environment. For example, Company A may develop an automotive chip, but to serve different clients, they will need to provide different SDKs (Software Development Kits). This is where design service companies like us play a crucial role in SoC integration and late-stage development. It is almost like providing soft IP, or what we call a platform. A platform includes the processing unit, such as ARM or RISC-V cores, as well as peripheral IPs like memory interfaces, PCI, HDMI for displays, and more. To meet the needs of today's automotive industry, companies must be ready to deliver pre-validated, off-the-shelf technology platforms.
You mentioned hyperscale applications and automotive platforms. In our company, we offer a platform called ADP 500 Series. It is designed specifically for automotive applications and includes four to eight computing units based on IP from ARM. One critical requirement today is cybersecurity. There are new automotive regulations that cover not only silicon design but also the entire system interface. Our platform includes two major components: a high-performance processing unit and a dedicated security operation module.

ADP 500 architecture
We currently have customers in both Korea and the United States. In some cases, we work directly with OEMs; in others, we partner with fabless semiconductor companies. In China, for instance, there are prominent fabless players who work directly with OEMs, and Korea is somewhat behind these Chinese fabless companies. Korea has companies like Telechips and other smaller fabless firms that supply semiconductors to Hyundai and Kia. Some of them are also targeting global clients like BMW and Mercedes-Benz. In those cases, we offer our platform to support their business efforts.
Power efficiency is another crucial element. Our platform, in combination with our implementation capabilities and the Capella design flow, offers significant power advantages. In earlier nodes, we could promise 10 to 20 percent power reduction. But now, as process technologies advance to 5nm and 4nm, even a 2 to 3 percent improvement becomes highly meaningful, because the transistor density is so much higher.
As NVIDIA’s CEO often says, we are chasing millions of operations per second, but at that scale, even a 2 or 3 percent edge is huge. Power, performance, and area are always in a trade-off relationship. Reducing power usually requires compromising on performance or increasing the die area. Each factor must be carefully balanced, and our role is to provide optimized solutions in that delicate equation.
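A back-of-envelope calculation makes the point about small percentages at scale. The per-chip power, fleet size, and improvement figures below are hypothetical illustrations, not data from the interview:

```python
# Why a 2-3% power improvement matters at scale (hypothetical numbers):
# applied across a large fleet of accelerators, small per-chip savings
# compound into megawatts of continuous draw.

CHIP_POWER_W = 700        # assumed per-accelerator power draw
FLEET_SIZE = 100_000      # assumed number of chips in a data center fleet
IMPROVEMENT = 0.03        # a 3% power reduction from design optimization

saved_watts = CHIP_POWER_W * FLEET_SIZE * IMPROVEMENT
print(f"Fleet-wide saving: {saved_watts / 1e6:.1f} MW")  # 2.1 MW
```

At data-center electricity prices, a continuous saving of that size is worth millions of dollars a year, which is why even low-single-digit PPA gains win designs at advanced nodes.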
Do you consider this business model still evolving, or has it already become the norm?
From our perspective, this new business model is already becoming a standard. We're seeing increasing engagement and solid traction in the industry.
We occasionally receive inquiries from Tier 1 companies that are also trying to reposition themselves within this evolving ecosystem. In the past, Tier 1s mainly focused on integration, architecture, and maintaining long-term relationships with OEMs. Now, they’re attempting to offer their own solutions. However, building these solutions from scratch is extremely difficult due to the high investment required. We’re not talking about millions of dollars, but hundreds of millions. Most Tier 1 companies simply cannot sustain that level of investment on their own.
As a result, we’re seeing more Tier 1s looking to collaborate with firms like ours to bridge the gap. This collaborative model, where chipmakers, design service providers, and system integrators all contribute their strengths, is rapidly becoming the norm in the EV and autonomous vehicle space.
In 2019, you made a major shift from being a TSMC partner to working with Samsung Foundry. Why did you believe that shifting from Taiwanese to Korean foundries was the right move for AD Technology?
There was certainly a lot of pressure when we announced our decision to switch from TSMC to Samsung Foundry. We had built a very solid and productive relationship with TSMC. However, as a listed company, we are under constant pressure to grow revenue and meet market demands. That reality naturally influences our business decisions.
In Korea, industry leadership is dominated by large corporations such as Samsung, LG, and SK Hynix. Even while we were working with TSMC, we had already been collaborating with Samsung Display, Samsung Mobile, and Samsung’s memory division, as well as with LG and SK Hynix. For instance, we were an early supplier of SSD controllers for SK Hynix. While they have since built up their own internal development teams, we once held more than 50% market share in Korea’s domestic semiconductor design services.
At our peak, our revenue reached 300 million dollars. But demand was growing even faster, and we needed to pursue larger opportunities. Around that time, TSMC introduced certain sales policies that limited our ability to expand within Korea. TSMC already had strong partners in other regions and wanted to control market dynamics more tightly in Korea. But we couldn’t continue to grow under those restrictions.
Samsung Foundry approached us with a business opportunity. It was a significant decision for us because we had deep expertise in process technology and ecosystem design. Samsung Foundry, at that time, was actively building out its ecosystem and needed system-level partners to support customers. We saw this not just as a challenge, but as an opportunity. That’s why we made the strategic shift to Samsung Foundry. After making the transition, we quickly established legal entities in both Europe and the United States. Even before that, we had two engineering campuses in Ho Chi Minh City, Vietnam, one focused on implementation engineering and another, located in central Ho Chi Minh, where higher salaries support advanced RTL and design work.
So, the decision to shift from TSMC to Samsung Foundry was not only about constraints, it was also about opportunity. We had a deep understanding of the process technologies of both foundries, and we recognized early on the advantages and disadvantages of each. Once we made the switch, our next question was: “How can we grow and win alongside Samsung Foundry?” As a design service company specializing in implementation, our value lies in providing complete “turnkey” solutions, delivering fully verified silicon to our customers. To do this, we must understand each component thoroughly, including analog IP, process IP, and interface IP.
We assessed the ecosystems of both TSMC and Samsung Foundry at the time and decided to create differentiation in two key ways. First, we focused on platform development. As SoCs become increasingly complex, customers need off-the-shelf platforms to accelerate design. Second, we prioritized IP ecosystem development, particularly in collaboration with IP suppliers. Major IP players like Synopsys and Cadence now dominate the space, having acquired many smaller companies. But they often focus only on high-margin IPs like USB, HDMI, or LPDDR. Developing new IP today requires massive investment: thousands of engineers and three to five years of effort. For a company of our size, that’s not sustainable. So instead, we focused on platforms, where the investment is primarily in software and soft IP.
We’ve had a strategic relationship with ARM for over 20 years. This has enabled us to prepare customized programs and build strategic relationships with customers. That’s why we’re now actively working in fields like automotive, edge AI, and hyperscale computing.
One of the most impactful recent moves was our collaboration with Rebellions. That was a concrete example of meaningful cooperation, not just a conceptual partnership.
Another critical reason we shifted to Samsung Foundry is that, at the time, there were no companies supporting foundation libraries or memory compilers in the Samsung ecosystem. There were some in-house and outsourced teams, but none operated independently or commercially.
We saw this as a gap, and an opportunity. Synopsys and Cadence do not focus on foundation libraries or memory compilers as their core business, so we felt there was little conflict. We could step in without friction. Developing foundation libraries and memory compilers requires deep understanding, not only of design flow, but of transistor and process technology. This level of expertise allows us to provide real PPA optimization.
Even if the gain is only 2% to 5%, that can be significant. It’s not always about delivering a 10% or 20% improvement. Our customers are sophisticated. They understand transistors, and they know that a small advantage can lead to a meaningful impact. That was our reasoning, and the strategy, behind our transition to working with Samsung Foundry.
You mentioned that this decision was driven by three key differentiators: your platform, your collaboration with ARM, and your expertise in PPA optimization and memory compilers. Now that you’ve built these differentiators, what is your strategy for turning them into revenue?
Our most high-profile initiative today is the chiplet solution developed in collaboration with Samsung Foundry, ARM, and Rebellions. This involves not just hardware, but also software, training, modeling, silicon solutions, and GPU technologies like CUDA. Whether it's for personal computing, cloud infrastructure, or sovereign data centers, these applications require versatile and scalable platforms.

ADP 620 Collaboration
ARM has developed a new CPU architecture called Neoverse V3, which is significantly more powerful than previous generations. It is designed for AI data centers, automotive systems, and software-defined vehicles. These new vehicles will require centralized computing units: one massive processing core, with interfaces managed separately for sensors, cameras, and external data. Instead of multiple distributed ECUs, you will see centralized architectures connected via automotive Ethernet.

ARM is already dominant in mobile computing and has surpassed competitors from the U.S., Japan, and Germany. Through architectural licenses, companies like Qualcomm, Apple, and even Samsung have developed their own customized ARM-based processors. ARM’s intellectual property forms the foundation of these products. Recently, ARM has been attempting to enter the laptop market. The challenge has been software support: Microsoft previously did not provide full support for ARM processors, but now it does, which has opened up new opportunities.
In terms of data centers, we are currently in the second generation of evolution. The first generation focused on data storage and basic cloud services. But then OpenAI changed everything: suddenly the industry shifted toward massive compute workloads, creating a new business model. This shift has exposed major inefficiencies in current data center architecture. Existing server processors, like Intel Xeon and AMD EPYC, offload AI acceleration through PCIe to external GPUs, but this architecture has poor utilization, often below 30%. The overhead is simply too high.

ARM, by contrast, has a long-standing understanding of mass processing and has been preparing for this moment. That’s why they developed Neoverse. Interestingly, ARM usually licenses its IP and allows customers to compile and design their own CPUs. But in the case of Neoverse, the architecture is too complex for most fabless companies to handle. This has led ARM to offer a Compute Subsystem (CSS), which includes reference designs and the full instruction flow, a kind of turnkey manual for building high-performance systems. The CSS is currently based on TSMC processes, but it represents a strategic opportunity for companies like ours to participate in end-to-end design support for high-performance computing and edge AI markets.
We’re currently focused on hyperscale and edge AI, and our platform-based solutions are gaining strong traction, particularly in sovereign markets. For example, we’ve delivered a platform based on ARM’s Neoverse V3 CPU, with a configuration using 32 cores. That’s significant because ARM’s CSS (Compute Subsystem) for V3, known as Voyager, is originally built around a 64-core design on TSMC’s 3nm process. Reconfiguring that architecture for a 32-core setup, with different cache sizes and system configurations, requires deep understanding of CPU architecture, not just surface-level integration.
Before this V3 collaboration, we also worked on ARM’s Neoverse N2, which offers slightly lower performance but is well suited for DPUs and edge AI servers. In the hyperscale space, tech giants like Microsoft, Meta, and Google are already developing their own CPUs. We’re positioning ourselves to catch up with and support this trend through experience and technology development. Through these collaborations, we’re building a full solution package. If Rebellions expands or if other AI inference-focused fabless companies want to enter this market, they will need partners like us. While NVIDIA dominates due to CUDA, companies like Google are developing alternative inference processors. Inference is a very specific workload, and many fabless companies are emerging to address it. These companies can be our future partners for building complex compute subsystems.
ARM’s architecture now includes the CHI (Coherent Hub Interface) protocol in its Neoverse systems. Meanwhile, inference companies often build their systems based on ARM’s AXI protocol. These two interconnect protocols are fundamentally different. It’s like speaking Vietnamese and Korean: without a translator, communication breaks down.
Through our work with ARM and Rebellions, we’ve helped bridge this gap by developing interface modules known as AEK (ARM Ecosystem Kit). This gives us a strong position. We’re now capable of offering our experience and solutions to other customers as well.
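The translator analogy can be sketched in code. The toy adapter below is purely illustrative: real CHI/AXI bridging is hardware IP that handles coherency states, ordering, and flow control, and the dictionary "message" format here is invented (though AXI's beats-minus-one ARLEN encoding and the CHI ReadNoSnoop opcode are real protocol details):

```python
# Toy illustration of protocol bridging between two interconnect
# vocabularies. Not real bridge IP: a hardware bridge works on wires
# and coherency state machines, not Python dictionaries.

def axi_read_request(address: int, burst_len_beats: int) -> dict:
    """A toy AXI-style read: ARLEN encodes (beats - 1) in real AXI."""
    return {"protocol": "AXI", "araddr": address,
            "arlen": burst_len_beats - 1}

def bridge_to_chi(axi_req: dict) -> dict:
    """Translate the toy AXI request into a toy CHI-style read message."""
    assert axi_req["protocol"] == "AXI"
    return {
        "protocol": "CHI",
        "opcode": "ReadNoSnoop",             # a non-coherent CHI read
        "addr": axi_req["araddr"],
        "size_beats": axi_req["arlen"] + 1,  # undo the beats-minus-one encoding
    }

chi_msg = bridge_to_chi(axi_read_request(0x8000_0000, 4))
print(chi_msg["opcode"], chi_msg["size_beats"])  # ReadNoSnoop 4
```

Even this toy shows the essential job of an ecosystem kit like the AEK described above: each side keeps speaking its native protocol while the bridge carries the meaning across.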
If everything proceeds as planned, we expect to tape out and finalize the first chiplet-based solution by next year. This is a foundational achievement, particularly relevant for the automotive market, which is expressing strong interest. However, it's important to understand that in automotive, semiconductors are not yet a core industry. In contrast, in the data center space, semiconductors are the foundation.
Eventually, as standards mature and successful pilots are completed in data centers, we expect chiplet technologies to migrate into automotive applications. But automotive OEMs and Tier 1 suppliers each have their own ideas, architectures, and requirements. With multi-die systems, testability and quality become critical. If one die out of five fails, the entire chiplet stack is compromised.
That’s why we’re documenting every step of our collaboration, building a reusable knowledge base. Our goal is to leverage this experience with future customers, ensuring high utilization, energy efficiency, and quality. This matters even more in chiplet architectures, where interconnect fabrics consume substantial power.
Looking forward, given that you have offices in various regions, where do you expect to see the most growth in terms of partnerships or revenue?
It’s a straightforward answer: the U.S. presents the most opportunity. There are numerous unicorn fabless companies emerging there, and we’ve already begun discussions about solutions in AI inference, network acceleration, and other specialized workloads. We’re also initiating partnerships in Israel, Germany, and France.
Fabless companies usually operate in a different ecosystem. How do you manage to open doors and get involved?
It’s definitely a challenge that requires persistence. Many fabless companies around the world are already aligned with TSMC. Even small players are looking to adopt cutting-edge nodes like 4nm or 2nm, but TSMC doesn’t have unlimited capacity.
That’s where Samsung Foundry has an opportunity. Their node naming starts with “S” (e.g., S4 for 4nm), while TSMC uses “N” (e.g., N5 for 5nm). In terms of power, performance, and area, TSMC N5 and Samsung S4 are roughly equivalent. Through our Capella methodology, we aim to make them truly comparable, 4 to 4, 5 to 5. Still, challenges remain, especially around memory quality and die size.
When we engage with new customers, we begin by signing an NDA and evaluating their design blocks, typically something like an NPU. Using Capella and our full-custom capabilities, we quickly assess their design for timing and compilation. We identify pain points, replace inefficient cells, and sometimes combine logic elements like flip-flops and control circuits into a single mega-function cell for optimized performance. This approach improves PPA. But technology alone isn’t enough, wafer pricing and delivery schedules matter just as much. It’s the combination of these factors that wins design awards and customer trust.
You already have a total of 800 employees. Are you planning to hire more, or are you currently satisfied with your engineering capacity?
So far, our strategy has been to build core competencies like Capella and platforms tailored to specific applications. We operate two campuses in Vietnam. One focuses on large-scale implementation and offers economic efficiency. The second campus, located in a central business district, offers salary levels similar to Korea and supports advanced RTL and design work.
Preparing Capella was no easy task, especially with overseas engineers. In the future, we might collaborate with teams in the Czech Republic or Eastern Europe. Internally, we have a subsidiary with over 30 years of engineering experience, which we acquired as part of our Samsung Foundry expansion strategy. Our Capella infrastructure team is solid. For the platform team, which focuses on architecture, we don’t need hundreds of engineers. Five to ten highly specialized architects are enough, as the work is modular. Once you understand ARM N2, transitioning to V3 is relatively straightforward.
Our platform team is subdivided by subsystem and application, which is a major strength. If we need more engineers, we will likely scale through our second Vietnam campus.
Globally, Canada is emerging as a hub for GPU and memory R&D, with talent pools from companies like AMD. The U.S. remains rich in engineering talent. Japan is strong in materials and fundamental engineering, not applications. Taiwan has a robust semiconductor ecosystem. In China, frequent job-hopping makes it difficult to build long-term loyalty.
Europe largely exited semiconductor development over a decade ago. But the recent U.S.-China conflict has prompted the EU to rethink its position. Much like NATO boosting defense spending, Europe now sees the need to invest in its own digital sovereignty. However, despite significant funding, accessing those funds is slow and bureaucratic, projects often take three or more years to launch. That makes survival difficult, especially compared to the fast-paced U.S. market.
In Korea, we’re currently satisfied with our engineering base. But if we need to expand, we will first look to Vietnam.
What last message would you like to share with our readers?
My hope is simple: if any company out there needs the solutions we offer, I want them to know that we’re ready. This isn’t just about promotion, it’s about precision. What we offer is a pinpoint solution, backed by deep experience and clear strategy.
Korea has several design service companies, but the term “design service” no longer accurately reflects our business model. Traditional design service is passive. We are active. We identify markets, prepare solutions in advance, engage with customers, and drive end-to-end execution. This proactive mindset is gaining ground in Korea as well. Globally, people are aware of Korea’s strengths in materials, devices, and system integration. Korean companies now export to the U.S., China, and beyond. But in areas like AI, fabless design, and platform-driven business models, Korea is still not seen as a dominant force. We want to change that perception. There is tremendous potential here, and many Korean companies are already preparing for the future.
For more information, explore their website at http://adtek.co.kr