Faculty
Hong Zhang is a Fellow of the Canadian Academy of Engineering, an IEEE Fellow, a Leading Talent of Guangdong Province's "Pearl River Talent Plan", and a Distinguished Talent of Shenzhen's "Peacock Plan". He is currently Chair Professor in the Department of Electronic and Electrical Engineering at the Southern University of Science and Technology (SUSTech) and Director of the Shenzhen Key Laboratory of Robot Vision and Navigation (https://rcvlab.eee.sustech.edu.cn). He previously spent many years in the Department of Computing Science at the University of Alberta, Canada, where he was a tenured Professor before his departure. During his time in Canada, he completed a number of major research and development projects and held an NSERC Industrial Research Chair (IRC). His current research interests include mobile robot navigation, autonomous driving, computer vision, and image processing. He serves on the editorial boards of several international journals and as conference general chair, served as Editor-in-Chief of the Conference Editorial Board of IROS, a flagship conference of the IEEE Robotics and Automation Society (2020–2022), and is currently a member of the Administrative Committee of the IEEE Robotics and Automation Society (RAS) (2023–2025).
Education
1982 BSc, Electrical Engineering, Northeastern University, USA
1986 PhD, Electrical Engineering, Purdue University, USA
1987 Postdoctoral Fellow, Computer and Information Science, University of Pennsylvania, USA
Work Experience
2020.10 – present Chair Professor, Department of Electronic and Electrical Engineering, Southern University of Science and Technology, China
2000.07 – 2020.10 Professor, Department of Computing Science, University of Alberta, Canada
2002.07 – 2003.04 Senior Visiting Research Fellow, Nanyang Technological University, Singapore
1994.07 – 2000.06 Associate Professor, Department of Computing Science, University of Alberta, Canada
1994.07 – 1995.06 Visiting Researcher, Mechanical Engineering Laboratory, Japan
1988.01 – 1994.06 Assistant Professor, Department of Computing Science, University of Alberta, Canada
Research Interests
Mobile robot navigation, visual SLAM, semantic mapping
Embodied intelligence, robot navigation and manipulation based on large models
Computer vision, object detection, object tracking, image segmentation
Polarization camera applications, HDR, 3D reconstruction
(Google Scholar: https://scholar.google.ca/citations?user=J7UkpAIAAAAJ)
Honors and Awards
2024 Best Paper Award in Robotics, 21st Conference on Robots and Vision (CRV)
2024 Fellow, Asia-Pacific Artificial Intelligence Association (AAIA)
2024 First Prize, Guangdong Electronics Society
2019 Best Paper Award, IEEE ROBIO 2019
2018 IEEE/RSJ IROS Distinguished Service Award
2015 Fellow, Canadian Academy of Engineering
2014 IEEE Fellow
2003–17 NSERC Industrial Research Chair
2008 Alberta Science and Technology Award
2006 Canadian Image Processing and Pattern Recognition Society Annual Award
2004 Member of the Year, Chinese Professors Association of Canada
2002 Faculty of Science Best Teacher Award, University of Alberta
2000 IEEE Millennium Medal
Recent Publications (2022– )
[1] S. Elkerdawy, M. Elhoushi, H. Zhang, and N. Ray, ‘Fire together wire together: A dynamic pruning approach with self-supervised mask prediction’, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12454–12463.
[2] X. Wang, H. Zhang, and G. Peng, ‘Evaluating and Optimizing Feature Combinations for Visual Loop Closure Detection’, Journal of Intelligent & Robotic Systems, vol. 104, no. 2, p. 31, 2022.
[3] I. Ali and H. Zhang, ‘Are we ready for robust and resilient slam? a framework for quantitative characterization of slam datasets’, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 2810–2816.
[4] M. Shakeri and H. Zhang, ‘Highlight specular reflection separation based on tensor low-rank and sparse decomposition using polarimetric cues’, arXiv preprint arXiv:2207.03543, 2022.
[5] I. Ali and H. Zhang, ‘Optimizing SLAM Evaluation Footprint Through Dynamic Range Coverage Analysis of Datasets’, in 2023 Seventh IEEE International Conference on Robotic Computing (IRC), 2023, pp. 127–134.
[6] S. An et al., ‘Deep tri-training for semi-supervised image segmentation’, IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 10097–10104, 2022.
[7] B. Yang, J. Li, Z. Shao, and H. Zhang, ‘Robust UWB indoor localization for NLOS scenes via learning spatial-temporal features’, IEEE Sensors Journal, vol. 22, no. 8, pp. 7990–8000, 2022.
[8] G. Chen, L. He, Y. Guan, and H. Zhang, ‘Perspective phase angle model for polarimetric 3d reconstruction’, in European Conference on Computer Vision, 2022, pp. 398–414.
[9] H. Ye, J. Zhao, Y. Pan, W. Chen, and H. Zhang, ‘Following Closely: A Robust Monocular Person Following System for Mobile Robot’, arXiv preprint arXiv:2204.10540, 2022.
[10] R. Zhou, L. He, H. Zhang, X. Lin, and Y. Guan, ‘Ndd: A 3d point cloud descriptor based on normal distribution for loop closure detection’, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 1328–1335.
[11] H. Ye, J. Zhao, Y. Pan, W. Chen, L. He, and H. Zhang, ‘Robot person following under partial occlusion’, in 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023, pp. 7591–7597.
[12] Y. Pan, L. He, Y. Guan, and H. Zhang, ‘An Experimental Study of Keypoint Descriptor Fusion’, in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2022, pp. 699–704.
[13] B. Liu, Y. Fu, F. Lu, J. Cui, Y. Wu, and H. Zhang, ‘NPR: Nocturnal Place Recognition in Streets’, arXiv preprint arXiv:2304.00276, 2023.
[14] H. Ye, W. Chen, J. Yu, L. He, Y. Guan, and H. Zhang, ‘Condition-invariant and compact visual place description by convolutional autoencoder’, Robotica, vol. 41, no. 6, pp. 1718–1732, 2023.
[15] C. Tang, D. Huang, L. Meng, W. Liu, and H. Zhang, ‘Task-oriented grasp prediction with visual-language inputs’, in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023, pp. 4881–4888.
[16] C. Tang, D. Huang, W. Ge, W. Liu, and H. Zhang, ‘Graspgpt: Leveraging semantic knowledge from a large language model for task-oriented grasping’, IEEE Robotics and Automation Letters, 2023.
[17] C. Tang, J. Yu, W. Chen, B. Xia, and H. Zhang, ‘Relationship oriented semantic scene understanding for daily manipulation tasks’, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 9926–9933.
[18] B. Yang, J. Li, Z. Shao, and H. Zhang, ‘Self-supervised deep location and ranging error correction for UWB localization’, IEEE Sensors Journal, vol. 23, no. 9, pp. 9549–9559, 2023.
[19] J. Ruan, L. He, Y. Guan, and H. Zhang, ‘Combining scene coordinate regression and absolute pose regression for visual relocalization’, in 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023, pp. 11749–11755.
[20] X. Liu, S. Wen, and H. Zhang, ‘A real-time stereo visual-inertial SLAM system based on point-and-line features’, IEEE Transactions on Vehicular Technology, vol. 72, no. 5, pp. 5747–5758, 2023.
[21] K. Cai, W. Chen, C. Wang, H. Zhang, and M. Q.-H. Meng, ‘Curiosity-based robot navigation under uncertainty in crowded environments’, IEEE Robotics and Automation Letters, vol. 8, no. 2, pp. 800–807, 2022.
[22] X. Lin, J. Ruan, Y. Yang, L. He, Y. Guan, and H. Zhang, ‘Robust data association against detection deficiency for semantic SLAM’, IEEE Transactions on Automation Science and Engineering, vol. 21, no. 1, pp. 868–880, 2023.
[23] W. Chen et al., ‘Keyframe Selection with Information Occupancy Grid Model for Long-term Data Association’, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 2786–2793.
[24] W. Yang, Y. Zhuang, D. Luo, W. Wang, and H. Zhang, ‘VI-HSO: Hybrid Sparse Monocular Visual-Inertial Odometry’, IEEE Robotics and Automation Letters, 2023.
[25] L. He and H. Zhang, ‘Large-scale graph sinkhorn distance approximation for resource-constrained devices’, IEEE Transactions on Consumer Electronics, 2023.
[26] W. Chen et al., ‘Cloud Learning-based Meets Edge Model-based: Robots Don’t Need to Build All the Submaps Itself’, IEEE Transactions on Vehicular Technology, 2023.
[27] W. Chen, C. Fu, and H. Zhang, ‘Rumination meets vslam: You don’t need to build all the submaps in realtime’, Authorea Preprints, 2023.
[28] Z. Tang, H. Ye, and H. Zhang, ‘Multi-Scale Point Octree Encoding Network for Point Cloud Based Place Recognition’, in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023, pp. 9191–9197.
[29] L. He, W. Li, Y. Guan, and H. Zhang, ‘IGICP: Intensity and geometry enhanced LiDAR odometry’, IEEE Transactions on Intelligent Vehicles, 2023.
[30] L. He and H. Zhang, ‘Doubly stochastic distance clustering’, IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 11, pp. 6721–6732, 2023.
[31] S. Wen, P. Li, and H. Zhang, ‘Hybrid Cross-Transformer-KPConv for Point Cloud Segmentation’, IEEE Signal Processing Letters, 2023.
[32] J. Li et al., ‘Deep learning based defect detection algorithm for solar panels’, in 2023 WRC Symposium on Advanced Robotics and Automation (WRC SARA), 2023, pp. 438–443.
[33] B. Liu et al., ‘NocPlace: Nocturnal Visual Place Recognition Using Generative and Inherited Knowledge Transfer’, arXiv preprint arXiv:2402.17159, 2024.
[34] S. Wen, X. Liu, Z. Wang, H. Zhang, Z. Zhang, and W. Tian, ‘An improved multi-object classification algorithm for visual SLAM under dynamic environment’, Intelligent Service Robotics, vol. 15, no. 1, pp. 39–55, 2022.
[35] W. Ge, C. Tang, and H. Zhang, ‘Commonsense Scene Graph-based Target Localization for Object Search’, arXiv preprint arXiv:2404.00343, 2024.
[36] J. Zhao, H. Ye, Y. Zhan, and H. Zhang, ‘Human Orientation Estimation under Partial Observation’, arXiv preprint arXiv:2404.14139, 2024.
[37] H. Tao, B. Liu, J. Cui, and H. Zhang, ‘A convolutional-transformer network for crack segmentation with boundary awareness’, in 2023 IEEE International Conference on Image Processing (ICIP), 2023, pp. 86–90.
[38] J. Yin, Y. Zhuang, F. Yan, Y.-J. Liu, and H. Zhang, ‘A Tightly-Coupled and Keyframe-Based Visual-Inertial-Lidar Odometry System for UGVs With Adaptive Sensor Reliability Evaluation’, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024.
[39] I. Ali, B. Wan, and H. Zhang, ‘Prediction of SLAM ATE using an ensemble learning regression model and 1-D global pooling of data characterization’, arXiv preprint arXiv:2303.00616, 2023.
[40] S. An et al., ‘An Open-Source Robotic Chinese Chess Player’, in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023, pp. 6238–6245.
[41] X. Liu, S. Wen, J. Zhao, T. Z. Qiu, and H. Zhang, ‘Edge-Assisted Multi-Robot Visual-Inertial SLAM With Efficient Communication’, IEEE Transactions on Automation Science and Engineering, 2024.
[42] S. Ji et al., ‘A Point-to-distribution Degeneracy Detection Factor for LiDAR SLAM using Local Geometric Models’, in 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 12283–12289.
[43] D. Huang, C. Tang, and H. Zhang, ‘Efficient Object Rearrangement via Multi-view Fusion’, in 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 18193–18199.
[44] H. Ye, J. Zhao, Y. Zhan, W. Chen, L. He, and H. Zhang, ‘Person re-identification for robot person following with online continual learning’, IEEE Robotics and Automation Letters, 2024.
[45] W. Chen et al., ‘Cloud-edge Collaborative Submap-based VSLAM using Implicit Representation Transmission’, IEEE Transactions on Vehicular Technology, 2024.
[46] G. Zeng, B. Zeng, Q. Wei, H. Hu, and H. Zhang, ‘Visual Object Tracking with Mutual Affinity Aligned to Human Intuition’, IEEE Transactions on Multimedia, 2024.