Tianjian Lu
I received the B.E. degree from the National University of Singapore in 2010, and the M.S. and Ph.D. degrees from the University of Illinois at Urbana-Champaign in 2012 and 2016, respectively, all in electrical engineering. Since 2016, I have been a Hardware Engineer with Google working on Pixel phones. My research interests include multiphysics modeling and simulation, signal and power integrity, and machine learning. I was the recipient of the Best Student Paper Award (first-place winner) at the 31st International Review of Progress in ACES, Williamsburg, VA, USA, in 2015; the Best Student Paper Award at the IEEE Electrical Design of Advanced Packaging and Systems, Honolulu, HI, USA, in 2016; and the P. D. Coleman Outstanding Research Award from the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, in 2016.
Authored Publications
    Nonuniform Fast Fourier Transform on TPUs
    Chao Ma
    Thibault Marin
    Yi-fan Chen
    Yue Zhuo
    IEEE International Symposium on Biomedical Imaging, 2021 (to appear)
    In this work, we present a parallel algorithm for implementing the nonuniform fast Fourier transform (NUFFT) on Google's Tensor Processing Units (TPUs). The TPU is a hardware accelerator originally designed for deep learning applications. The NUFFT is the main computational bottleneck in magnetic resonance (MR) image reconstruction, and the proposed TPU implementation is promising for accelerating MR image reconstruction to clinically practical runtimes. The computation of the NUFFT consists of three operations: an apodization, an FFT, and an interpolation, all formulated as tensor operations to fully utilize the TPU's strength in matrix multiplications. The implementation is written in TensorFlow. Numerical examples show a satisfactory acceleration of the NUFFT on TPUs, and a breakdown of the computation time identifies the interpolation as the most computationally expensive of the three operations. A strong-scaling analysis demonstrates the high parallel efficiency of the implementation.
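    The three NUFFT operations map directly onto array primitives. Below is a minimal, illustrative TensorFlow sketch of this formulation (not the paper's implementation; the dense `interp_matrix` operator and the shapes are assumptions):

```python
import tensorflow as tf

def nufft_sketch(image, apodization, interp_matrix):
    """Illustrative NUFFT as apodization -> FFT -> interpolation."""
    # Apodization: elementwise pre-compensation for the interpolation kernel.
    x = tf.cast(image * apodization, tf.complex64)
    # Uniform FFT on the (oversampled) Cartesian grid.
    k_grid = tf.signal.fft2d(x)
    # Interpolation onto nonuniform k-space locations, expressed as a
    # matrix multiplication so it runs on the TPU's matrix units.
    k_flat = tf.reshape(k_grid, [-1, 1])
    return tf.matmul(interp_matrix, k_flat)  # interp_matrix: complex64
```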
    Distributed Data Processing for Large-Scale Simulations on Cloud
    Lily Hu
    Yi-fan Chen
    2021 IEEE International Symposium on Electromagnetic Compatibility, Signal & Power Integrity (to appear)
    In this work, we propose a distributed data pipeline for large-scale simulations using libraries and frameworks available on Cloud services. The pipeline is designed with careful consideration of the characteristics of the simulation data and is implemented with Apache Beam and Zarr. Beam is a unified, open-source programming model for building both batch and streaming data-parallel processing pipelines; with Beam, one can focus on the logical composition of the data-processing task and bypass the low-level details of distributed computing. The orchestration of distributed processing is fully managed by the runner, in this work Dataflow on Google Cloud. Because Beam separates the programming layer from the runtime layer, the proposed pipeline can be executed across various runners. The output tensors of the pipeline are stored in Zarr, which allows concurrent reading and writing, storage on a file system, and compression of the data before storage. The performance of the pipeline is analyzed with an example in which the simulation data are produced by an in-house computational fluid dynamics solver running in parallel on Tensor Processing Unit (TPU) clusters. The analysis demonstrates good storage and computational efficiency of the proposed pipeline.
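    As a rough sketch of this pattern (the actual pipeline stages and data schema are not given in the abstract; names and shapes below are placeholders), a Beam pipeline can write non-overlapping chunks of a pre-created Zarr array in parallel:

```python
import apache_beam as beam
import numpy as np
import zarr

# Create the output Zarr array up front; one chunk per simulation step so
# that parallel workers write to non-overlapping chunks.
zarr.open("output.zarr", mode="w", shape=(8, 64, 64),
          chunks=(1, 64, 64), dtype="f8")

def write_chunk(indexed_chunk):
    step, data = indexed_chunk
    z = zarr.open("output.zarr", mode="r+")
    z[step] = data

with beam.Pipeline() as pipeline:  # runner (e.g. Dataflow) chosen via options
    _ = (
        pipeline
        | "LoadSteps" >> beam.Create([(i, np.full((64, 64), float(i)))
                                      for i in range(8)])
        | "Postprocess" >> beam.Map(lambda kv: (kv[0], kv[1] * 2.0))
        | "WriteZarr" >> beam.Map(write_chunk)
    )
```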
    Accelerating MRI Reconstruction on TPUs
    Chao Ma
    Thibault Marin
    Yi-fan Chen
    Yue Zhuo
    IEEE High Performance Extreme Computing Conference (2020)
    Advanced magnetic resonance (MR) image reconstruction methods, such as compressed sensing and subspace-based imaging, are formulated as large-scale, iterative optimization problems. Given the large number of reconstructions required in practical clinical use, the computation time of these advanced methods is often unacceptable. In this work, we propose using Google's Tensor Processing Units (TPUs) to accelerate MR image reconstruction. The TPU is an application-specific integrated circuit (ASIC) designed for machine learning applications that has recently been used to solve large-scale scientific computing problems. As a proof of concept, we implement the alternating direction method of multipliers (ADMM) in TensorFlow to reconstruct images on TPUs. The reconstruction is based on multi-channel, sparsely sampled, radial-trajectory $k$-space data with sparsity constraints. The forward and inverse nonuniform Fourier transform operations are formulated as matrix multiplications, as in the discrete Fourier transform, and the sparsifying transform and its adjoint are formulated as convolutions. Data decomposition is applied to the measured $k$-space data so that the aforementioned tensor operations are localized within individual TPU cores. The data decomposition and inter-core communication strategy are designed in accordance with the TPU interconnect network topology to minimize communication time. The accuracy and high parallel efficiency of the proposed TPU-based reconstruction method are demonstrated through numerical examples.
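    For concreteness, one ADMM iteration for a sparsity-constrained reconstruction $\min_x \|Ax - y\|_2^2 + \tau \|Wx\|_1$ can be sketched as follows (a schematic only, with the NUFFT operator A, sparsifying transform W, and the x-update solver abstracted away):

```python
import tensorflow as tf

def soft_threshold(v, tau):
    """Proximal operator of the l1 norm, applied elementwise."""
    mag = tf.abs(v)
    scale = tf.maximum(mag - tau, 0.0) / tf.maximum(mag, 1e-12)
    return v * tf.cast(scale, v.dtype)

def admm_step(x, z, u, A, At, W, Wt, y, rho, tau, solve_normal_eq):
    """One iteration of ADMM for min ||Ax - y||^2 + tau*||Wx||_1."""
    # x-update: solve (A^H A + rho W^H W) x = A^H y + rho W^H (z - u),
    # delegated to a user-supplied solver (e.g. conjugate gradients).
    x = solve_normal_eq(At(y) + rho * Wt(z - u))
    # z-update: soft thresholding in the sparsifying-transform domain.
    z = soft_threshold(W(x) + u, tau / rho)
    # Dual-variable update.
    u = u + W(x) - z
    return x, z, u
```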
    Deep Learning Models for Predicting Wildfires from Historical Remote-Sensing Data
    Fantine Huot
    R. Lily Hu
    Matthias Ihme
    John Burge
    Jason J. Hickey
    Yi-fan Chen
    John Roberts Anderson
    NeurIPS Artificial Intelligence for Humanitarian Assistance and Disaster Response Workshop (2020)
    Identifying regions with a high likelihood of wildfires is a key component of land and forestry management and disaster preparedness. We create a data set for predicting wildfires by aggregating nearly a decade of remote-sensing data and historical fire records, and frame the prediction problem as three machine learning tasks. We compare and analyze the results of four deep learning models for estimating wildfire likelihood. The results demonstrate that deep learning models can successfully identify areas of high fire likelihood from aggregated data about vegetation, weather, and topography, with an AUC of 83%.
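    A minimal sketch of one such model (the abstract does not specify the architectures; the tile size, channel count, and layers below are assumptions) is a small convolutional network mapping stacked vegetation, weather, and topography features to a per-pixel fire likelihood, evaluated with AUC:

```python
import tensorflow as tf

# Hypothetical input: a spatial tile with stacked per-pixel feature channels
# (vegetation, weather, topography); output: per-pixel fire likelihood.
inputs = tf.keras.Input(shape=(64, 64, 12))  # 12 feature channels, assumed
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC()],  # the paper reports an AUC of 83%
)
```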
    Multiphysics Modeling of Voice Coil Actuators with Recurrent Neural Network
    Ken Wu
    Michael Smedegaard
    IEEE Journal on Multiscale and Multiphysics Computational Techniques (2019)
    To accurately model the behavior of a voice coil actuator (VCA), a three-dimensional (3-D) model is preferred over a lumped model. However, solving a 3-D model of a VCA system can be very computationally expensive, with the efficiency often limited by the spatial discretization, the multiphysics nature, and the nonlinearities of the system. To enhance computational efficiency, we propose incorporating a recurrent neural network (RNN) into the multiphysics simulation. In the proposed approach, the multiphysics problem is first solved with the finite element method (FEM) at full 3-D accuracy for a portion of the required time steps. An RNN is trained and validated on the obtained transient solutions. Once training is complete, the RNN predicts the results for the remaining time steps. The proposed approach avoids solving the multiphysics problem at every time step and thus achieves a significant reduction in computation time; the training cost of the RNN can be amortized when a longer duration of transient solutions is required. A numerical example demonstrates the improvement in computational efficiency, and the topology of the neural network and its tunable parameters are investigated with the same example.
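    A minimal sketch of the idea, assuming the FEM transient solution is available as an array of shape (n_steps, n_features) (the RNN topology and window length here are illustrative, not those of the paper):

```python
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 16, 8  # assumed window length and state-vector size

def make_windows(series, window):
    """Sliding windows: predict the next state from the previous `window`."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

# Placeholder for the FEM transient solution over the first portion of the
# required time steps, shape (n_steps, N_FEATURES).
fem_solution = np.random.rand(200, N_FEATURES).astype("float32")
X, y = make_windows(fem_solution, WINDOW)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.Dense(N_FEATURES),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)
# Roll the trained model forward autoregressively to generate the
# remaining time steps without further FEM solves.
```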
    Fast and Accurate Current Prediction in Packages Using Neural Networks
    Jian-Ming Jin
    Jin Y. Kim
    Ken Wu
    Yanan Liu
    EMC 2019
    Electromigration (EM) has become a major reliability concern in modern integrated-circuit packages. EM is caused by large currents flowing in metals, and the mean time to failure (MTTF) depends strongly on the maximum current value. We propose a scheme for fast and accurate prediction of the maximum current on the ball grid arrays (BGAs) in a package, given the pin-current information of the die. The proposed scheme uses neural networks to learn the resistance network of the package and achieve the nonlinear current mapping. The resulting fast prediction tool can be used for analysis and design exploration of the pin assignment at the die level.
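    A hypothetical sketch of such a network (the abstract does not give the architecture; the pin count and layer sizes are placeholders):

```python
import tensorflow as tf

N_PINS = 256  # assumed number of die pins

# Feed-forward network learning the nonlinear mapping from die pin currents
# to the maximum current on the package BGAs.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(N_PINS,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted maximum BGA current
])
model.compile(optimizer="adam", loss="mse")
```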
    High-Speed Channel Modeling with Deep Neural Network for Signal Integrity Analysis
    Ju Sun
    Ken Wu
    Zhiping Yang
    IEEE Conference on Electrical Performance of Electronic Packaging and Systems (EPEPS), 2017
    In this work, deep neural networks (DNNs) are trained to model high-speed channels for signal integrity analysis. The DNN models predict eye-diagram metrics by taking advantage of the large number of simulation results made available in a previous design or at an earlier design stage. The proposed DNN models characterize high-speed channels through extrapolation with saved coefficients, which requires no further complex simulations and is highly efficient. Numerical examples demonstrate that the proposed DNN models achieve good accuracy in predicting eye-diagram metrics from input design parameters. The DNN models make no assumptions about the distributions of, or the interactions among, individual design parameters.
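    As an illustration of this setup (the parameter count, outputs, and layer sizes are assumptions, not the paper's model), a DNN regressor maps design parameters to eye-diagram metrics:

```python
import tensorflow as tf

N_PARAMS = 10  # assumed number of channel design parameters

# Multi-output regression from design parameters (e.g. trace geometry,
# dielectric properties) to eye-diagram metrics.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(N_PARAMS,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3),  # e.g. eye height, eye width, jitter
])
model.compile(optimizer="adam", loss="mse")
# Once trained on prior-design simulations, prediction is a single cheap
# forward pass using the saved coefficients.
```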
    Modeling Capacitor Derating in Power Integrity Simulation
    Ken Wu
    Zhiping Yang
    EMC+SIPI 2017, Washington, DC
    In this work, we propose a simulation methodology that incorporates derating models of decoupling capacitors for power integrity analysis. The derating models are constructed from impedance measurements and a curve-fitting method: three impedance-measurement approaches are compared, the most accurate is selected to build the models, and the curve-fitting method converts the measured impedance into circuit models. A library file containing the derating models is generated so that it can be reused across different products and design cycles. The derating library accounts for operating conditions such as temperature and DC bias, as well as vendor information. The proposed simulation methodology with the derating library achieves high accuracy, demonstrated through correlation with measurements.
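    As a sketch of the curve-fitting step (the paper's circuit topology is not given; a series-RLC model is assumed here), measured impedance at a given temperature and DC bias can be fit to circuit parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def series_rlc(f, R, L, C):
    """Impedance magnitude of a series-RLC model of a decoupling capacitor."""
    w = 2.0 * np.pi * f
    return np.abs(R + 1j * w * L + 1.0 / (1j * w * C))

# freq (Hz) and z_meas (ohms) would come from impedance measurements at a
# given operating condition; synthetic placeholder values are used here.
freq = np.logspace(4, 8, 200)
z_meas = series_rlc(freq, 0.01, 1.0e-9, 10.0e-6)

(R, L, C), _ = curve_fit(series_rlc, freq, z_meas, p0=[0.01, 1e-9, 1e-6])
print(f"Fitted model: R={R:.3g} ohm, L={L:.3g} H, C={C:.3g} F")
```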