Distinguished Lecturer
Dr. Olena (Jianfang) Zhu

About Dr. Olena (Jianfang) Zhu

Term 2026-2027

Head of AI Solutions & Ecosystem, Client Computing Group, Intel Corporation
Hillsboro, Oregon, USA

olena.j.zhu@intel.com

Dr. Olena Zhu is the Head of AI Solutions & Ecosystem in Intel’s Client Computing Group, where she leads Intel’s strategy for AI-enabled client and edge computing platforms. She drives the development of multi-agent, hybrid on-device/cloud AI solutions, including the Intel AI Assistant Builder, and builds strategic AI ecosystems with partners such as Microsoft, Mistral, and Perplexity. Her work focuses on transforming PCs and edge devices into agentic, intelligent platforms, accelerating practical AI adoption across consumer and enterprise environments through tightly integrated hardware–software–AI co-design.

Previously, Dr. Zhu served as Chief AI Technologist for Intel’s Client Computing Group, where she initiated and led Intel’s corporate-level Augmented Intelligence (AuI) strategy—combining domain expertise with machine intelligence to transform end-to-end design methodologies. She led the development of AI-driven optimization solutions spanning silicon, package, board, and system design, achieving over 90% reduction in design cycle time with corporate-wide adoption. Her technical contributions include AI-assisted auto-overclocking for Intel’s 14th Generation CPUs, personalized PC optimization for performance and power efficiency, and high-dimensional global optimization algorithms achieving 2× speedup over prior approaches. Earlier roles at Intel include Principal Engineer for the Intel Evo™ platform, System Architect, and Technical Lead, where she delivered major advances in battery life, power management, low-power architectures, high-speed I/O design, and ML-enabled nonlinear circuit simulation.

Dr. Zhu received her Ph.D. in Electrical and Computer Engineering from Purdue University and her B.S. in Electronic Engineering from the University of Science and Technology of China. She has authored 50+ journal and conference publications in computational electromagnetics, signal and power integrity, and AI-driven system optimization, and holds 40+ granted or pending U.S. patents. Her work includes foundational contributions to full-wave electromagnetic solvers, nonlinear signaling analysis, and EMC/SI modeling, published in leading IEEE journals. She is a recipient of the Intel Achievement Award and the SWE Emerging Leader Award, has served as Industry Chair for an IEEE MTT-S conference, and is an active mentor and leader within the global engineering and research community.

Topics & Abstracts

Topic 1: Fast, Accurate Signaling Analysis for Nonlinear High-Speed Channels: Eye Diagrams, Bit-Error Rates, and Jitter Effects
As high-speed interconnects continue scaling in data rate, channel complexity, packaging effects, and nonlinearity, designers face growing difficulty in accurately predicting signal integrity, worst-case eye diagrams, bit-error rates (BERs), and the impact of jitter. Traditional linear time-invariant (LTI) methods and exhaustive brute-force nonlinear simulation become inaccurate or prohibitively expensive for high channel memory and stringent BER targets: a channel with m bits of memory has 2ᵐ possible preceding bit patterns, so even m = 30 implies roughly a billion nonlinear simulations. Relying on AI approaches alone, such as Bayesian optimization, likewise fails to deliver an efficient and practical solution.

This presentation unifies three recent advances toward fast, accurate signaling analysis in nonlinear high-performance channels:

  1. Efficient Eye-Diagram Prediction via Low-Rank Approximations – This method treats the set of responses of a nonlinear channel with m bits of memory as a matrix whose columns are the output waveforms under different preceding bit patterns. Since, for a prescribed accuracy, the number of distinct waveforms (the effective rank k) is far smaller than the full 2ᵐ possibilities, a fast cross-approximation algorithm was developed whose time and memory complexity are independent of 2ᵐ. This allows accurate identification of the worst-case eye height and width with vastly fewer nonlinear simulations than brute force or Bayesian optimization (a toy illustration of this low-rank idea follows this list).
  2. Fast BER Analysis for Nonlinear Systems – Building on the low-rank framework, we further developed a method to compute bit-error rates of nonlinear circuits with high channel memory (large m) without exhaustively enumerating the 2ᵐ input patterns. Only O(k) nonlinear simulations, with k ≪ 2ᵐ, are needed to approximate the probability density function (PDF) of the received signal. The method also includes error quantification, so that the predicted BER (even an extremely low one, e.g., 10⁻⁵⁶) comes with a stated confidence. Real-world large-scale examples show orders-of-magnitude speedups with maintained accuracy.
  3. Signaling Analysis Including Jitter Effects – Our most recent work extends the framework to include jitter in nonlinear high-performance channels. It provides a methodology for incorporating both deterministic and random jitter, quantifying how timing uncertainty interacts with channel nonlinearity and inter-symbol interference (ISI), and assessing the impact on eye opening, worst-case eye distortions, and BER under more realistic conditions (a minimal jitter-averaging sketch also follows this list). This step is essential for bridging the gap between idealized channel models and what is observed in real silicon, package, and board environments.
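
To make the low-rank idea behind items 1 and 2 concrete, here is a minimal, self-contained Python sketch. It is not the cross-approximation algorithm from the talk: the tanh-compressed ISI channel, the random pattern pool standing in for the full 2ᵐ columns, the pool size, and the rank tolerance are all invented for illustration; a rank-revealing pivoted QR plays the role of exposing the small effective rank k.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
m = 20                                   # bits of channel memory: 2**m ~ 1e6 patterns

def simulate(bits):
    """Toy stand-in for one expensive nonlinear channel simulation.
    bits[0] is the current bit; bits[1:] are the preceding pattern."""
    t = np.linspace(-0.5, 0.5, 33)                     # one UI around the cursor
    tau = t[:, None] + np.arange(m)[None, :]           # delay of each bit's pulse
    p = np.where(tau >= 0, np.exp(-tau / 2.5) * np.cos(0.4 * tau), 0.0)
    return np.tanh(1.2 * (p @ (2.0 * bits - 1.0)))     # superposed ISI, compressed

# Sample a modest pool of bit patterns instead of enumerating all 2**m columns.
n_pool = 400
patterns = rng.integers(0, 2, size=(n_pool, m)).astype(float)
cols = np.stack([simulate(b) for b in patterns], axis=1)   # 33 x n_pool matrix

# Rank-revealing (pivoted) QR exposes the small effective rank of the responses.
Q, R, piv = qr(cols, pivoting=True)
tol = 1e-3 * abs(R[0, 0])
k = int(np.sum(np.abs(np.diag(R)) > tol))
print(f"effective rank k = {k}  (vs 2**m = {2**m} possible patterns)")

# Worst-case inner eye height at the sampling instant, over the sampled pool.
center = cols[16, :]                                   # waveform value at t = 0
ones, zeros = patterns[:, 0] == 1, patterns[:, 0] == 0
print("inner eye height ~", center[ones].min() - center[zeros].max())

# Item 2 builds on the same structure: a PDF of `center` assembled from O(k)
# representative waveforms (with error bounds) yields the BER estimate.
```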
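Similarly, here is a minimal sketch of how item 3's jitter effects can be folded into a BER estimate, using the standard dual-Dirac model for deterministic jitter convolved with a Gaussian for random jitter. The eye contour, receiver noise level, and jitter magnitudes below are invented, and the actual methodology is considerably more rigorous:

```python
import numpy as np
from scipy.stats import norm

t = np.linspace(-0.5, 0.5, 201)              # sampling phase within the UI
eye = np.clip(0.8 * np.cos(np.pi * t) - 0.2, 0.0, None)  # invented eye contour

sigma_n = 0.05                               # assumed receiver noise sigma
ber_vs_t = norm.sf(eye / sigma_n)            # error probability at each phase

# Dual-Dirac deterministic jitter (+/- dj) convolved with Gaussian random jitter
# gives a two-lobe sampling-phase distribution; average the BER over it.
sigma_rj, dj = 0.03, 0.05                    # invented jitter parameters (in UI)
w = 0.5 * (norm.pdf(t, -dj, sigma_rj) + norm.pdf(t, dj, sigma_rj))
w /= np.trapz(w, t)                          # renormalize on the finite grid

ber_jittered = np.trapz(ber_vs_t * w, t)     # jitter-averaged BER
print(f"BER at ideal phase: {ber_vs_t[100]:.2e}, with DJ+RJ: {ber_jittered:.2e}")
```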

These combined techniques enable practical, rigorous SI/PI/jitter analysis for next-generation high-rate I/Os (e.g., DDR, SerDes, DRAM, and interposer links), where design budgets are tight and nonlinearity and jitter can no longer be treated as small perturbations. Future work may extend in several directions: multi-channel interactions (more aggressors and crosstalk), adaptive equalization, dynamic jitter and noise sources, and real-time, in-design tools that embed these fast approximations.

Topic 2: Transforming On-Device AI: From Embedded to Agentic — An IEEE Editorial Assistant Case Study
The IEEE editorial process is essential but time-consuming: submissions are confidential and cannot be sent to cloud AI systems; reviewer identification is labor-intensive; and manuscript compliance checks consume valuable effort. To address these challenges, we piloted a local, on-device AI editorial assistant as part of the Lenovo AI Agent Program, powered by Intel’s AI Assistant Builder. This proof of concept automates formatting checks, suggests reviewers, and synthesizes reviewer feedback — accelerating the review cycle while safeguarding sensitive research data.

This case study highlights the broader shift toward agentic AI: proactive, self-reasoning agents that plan, decide, and act autonomously across devices and cloud platforms. Intel’s AI Assistant Builder (SuperBuilder) 2.0, launched at IFA 2025, provides a reference design for multi-agent orchestration via the Model Context Protocol (MCP). This framework supports local or hybrid AI (local + cloud), seamless agent/server integration, and flexible deployments that balance privacy, performance, and cost.
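
As a concrete, purely illustrative example of this MCP pattern, the sketch below exposes a single hypothetical manuscript-compliance tool through the open-source MCP Python SDK. The server name, tool, and rule are invented and do not describe SuperBuilder's internals:

```python
# pip install "mcp[cli]"  -- the reference Model Context Protocol Python SDK
import re

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("editorial-assistant")         # hypothetical local agent name

@mcp.tool()
def check_references(references: list[str]) -> list[str]:
    """Return reference entries missing a four-digit year, standing in for a
    real manuscript-compliance rule an editorial agent might enforce."""
    return [r for r in references if not re.search(r"\b(19|20)\d{2}\b", r)]

if __name__ == "__main__":
    mcp.run(transport="stdio")               # stdio keeps the agent fully local
```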

Finally, I will discuss how collaborations with industry partners are shaping the future of on-device agent ecosystems, opening new opportunities for EMC researchers and practitioners not only in publishing workflows, but also in simulations, data analysis, and research collaboration where privacy, efficiency, and automation are critical.

Topic 3: Augmented Intelligence for End-to-End Design
Chiplet and disaggregated architectures are rapidly becoming mainstream across applications from edge to server. Yet the resulting design complexity exceeds the capabilities of today’s tools, flows, and methodologies—particularly when aiming for highly optimized solutions at scale.

Augmented Intelligence, the combination of human expertise and machine intelligence, offers a transformative approach to this challenge. By assigning strategic, high-level decision-making to engineers and delegating computationally intensive, iterative tasks to AI, this framework enables multi-level and multi-domain optimization. The result is the ability to generate a far greater number of custom-optimized designs with the same resources—delivering competitive products with higher quality and faster time-to-market.
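
As a schematic of this division of labor, consider the sketch below, built on an entirely invented four-parameter interconnect-tuning objective rather than any Intel flow: the engineer contributes the cost function, constraints, and bounds, while an off-the-shelf global optimizer carries out the computationally intensive iterative search.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Engineer's role: encode domain knowledge as a cost function and hard bounds.
# Hypothetical parameters: trace width/spacing (mm), termination (ohm), drive.
def design_cost(x):
    width, spacing, r_term, drive = x
    si_loss = (width - 0.12) ** 2 + 0.5 * (spacing - 0.2) ** 2   # toy SI proxy
    mismatch = ((r_term - 50.0) / 50.0) ** 2                     # toy reflection proxy
    power = 0.02 * drive ** 2                                    # toy power proxy
    penalty = 1e3 * max(0.0, 0.15 - spacing)                     # crosstalk constraint
    return si_loss + mismatch + power + penalty

# Machine's role: run the iterative global search within the engineer's bounds.
bounds = [(0.05, 0.3), (0.1, 0.5), (30.0, 80.0), (0.5, 4.0)]
result = differential_evolution(design_cost, bounds, seed=1)
print("optimized parameters:", np.round(result.x, 3), "cost:", round(result.fun, 4))
```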

At Intel, in collaboration with partners, we have developed and deployed Augmented Intelligence solutions spanning design from silicon to system and from hardware to software. These efforts have demonstrated efficiency gains exceeding 90% in critical areas. In this talk, I will share practical examples and key insights from several years of applying Augmented Intelligence to end-to-end design, highlighting how human–AI collaboration is reshaping the path to innovation.
