Sunwei
From Nanjing University IIP
M.Sc. Student @ IIP Group
Email: weisun_@outlook.com
Supervisor
- Professor Jun-Yuan Xie
Biography
- I received my B.Sc. degree from Soochow University in June 2017. In the same year, I was admitted to Nanjing University as a Master's student, exempted from the entrance examination. I am currently a second-year M.Sc. student in the Department of Computer Science and Technology at Nanjing University and a member of the IIP Group, led by Professors Jun-Yuan Xie and Chong-Jun Wang.
Research Interest
I am interested in Machine Learning and Multi-label Learning.
- On Multi-label Text Classification (MLTC), text features can be regarded as a detailed description of a document, while its label set can be viewed as a summary of the document. Hybrid topics learned by LDA (a topic model) from both text features and label sets can effectively mine global label correlations and deeper features (see the LDA sketch after this list). The paper "An Efficient Framework for Multi-label Text Classification" has been accepted at IJCNN 2019.
- Deep learning for multi-label text classification. We utilize dilated convolution to obtain a semantic understanding of the text and design a hybrid attention mechanism for different labels (specifically, each label should attend to the most relevant textual contents). First, we initialize trainable label embeddings. Then, after obtaining word-level information from a Bi-LSTM, we derive the semantic understanding of the text from that word-level information via dilated convolution. Finally, we design a hybrid attention for the different labels based on the label embeddings. Besides, we add a label co-occurrence matrix to the loss function to guide the learning of the whole network, which achieves good results (a rough architecture sketch follows this list). The paper "Multi-label Classification: Select Distinct Semantic Understanding of Text for Different Labels" has been accepted at APWEB-WAIM 2019.
- GCNs (Graph Convolutional Networks) can be used to exploit more complex label correlations in image multi-label learning (sketched after this list).
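To make the hybrid-topic idea concrete, here is a minimal sketch using gensim's LdaModel. The toy documents, the LABEL_ token prefix, and the choice of two topics are illustrative assumptions, not details from the paper.

```python
# A minimal sketch (not the paper's exact pipeline) of building "hybrid topics":
# each training document is represented by its text tokens plus special tokens
# for its labels, so LDA topics jointly cover words and labels.
from gensim import corpora, models

# Toy corpus: (tokenized text, label set) pairs -- placeholders for illustration.
docs = [
    (["stock", "market", "shares", "rise"], ["finance", "economy"]),
    (["team", "wins", "league", "match"],   ["sports"]),
    (["election", "votes", "policy"],       ["politics", "economy"]),
]

# Append label tokens (prefixed to avoid clashing with ordinary words).
hybrid_docs = [text + ["LABEL_" + l for l in labels] for text, labels in docs]

dictionary = corpora.Dictionary(hybrid_docs)
corpus = [dictionary.doc2bow(d) for d in hybrid_docs]

# Train LDA over the hybrid word/label vocabulary.
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2,
                      passes=20, random_state=0)

# The per-document topic distribution can serve as a deeper feature vector,
# and label tokens that share a topic indicate globally correlated labels.
for bow in corpus:
    print(lda.get_document_topics(bow, minimum_probability=0.0))
```

At test time, topic distributions would be inferred from the text tokens alone, since the label tokens are unknown for unseen documents.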
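The dilated-convolution / label-attention model described above can be sketched roughly as below in PyTorch. The dimensions, the einsum-based attention, and the exact form of the co-occurrence penalty are my own assumptions for illustration, not the published architecture.

```python
# Rough sketch: Bi-LSTM word encoding -> dilated convolution for wider-context
# semantics -> one attention distribution per label, driven by trainable label
# embeddings -> BCE loss plus an assumed label co-occurrence penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAttentionMLTC(nn.Module):
    def __init__(self, vocab_size, num_labels, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # Dilated convolution over the Bi-LSTM outputs to enlarge the receptive field.
        self.dilated = nn.Conv1d(2 * hidden, 2 * hidden, kernel_size=3,
                                 dilation=2, padding=2)
        # Trainable label embeddings, one per label.
        self.label_emb = nn.Parameter(torch.randn(num_labels, 2 * hidden))
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, tokens):                                    # tokens: (B, T)
        h, _ = self.bilstm(self.embed(tokens))                    # (B, T, 2H) word-level info
        s = self.dilated(h.transpose(1, 2)).transpose(1, 2)       # (B, T, 2H) semantics
        # Per-label attention: each label attends to its most relevant positions.
        scores = torch.einsum("btd,ld->blt", s, self.label_emb)   # (B, L, T)
        alpha = F.softmax(scores, dim=-1)
        ctx = torch.einsum("blt,btd->bld", alpha, s)              # (B, L, 2H)
        return self.out(ctx).squeeze(-1)                          # (B, L) logits

def cooccurrence_loss(logits, targets, cooc, lam=0.1):
    """BCE plus a penalty pulling predicted probabilities of frequently
    co-occurring labels toward each other (an assumed form of the regularizer).
    cooc is an (L, L) float matrix of label co-occurrence weights."""
    probs = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, targets.float())
    diff = probs.unsqueeze(2) - probs.unsqueeze(1)   # (B, L, L) pairwise gaps
    penalty = (cooc * diff.pow(2)).mean()
    return bce + lam * penalty
```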
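For the GCN direction, a common construction (in the spirit of ML-GCN, not a specific released implementation) propagates label embeddings over a normalized co-occurrence graph and uses the outputs as label-specific classifiers applied to CNN image features; the sketch below assumes that setup and the stated dimensions.

```python
# Sketch: a two-layer GCN over the label graph produces one classifier vector
# per label, which is applied to pooled CNN image features for multi-label logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):          # x: (L, D), adj: normalized (L, L)
        return adj @ self.weight(x)     # propagate label features over the graph

class LabelGCNHead(nn.Module):
    def __init__(self, label_dim=300, img_dim=2048, hidden=1024):
        super().__init__()
        self.gc1 = GraphConv(label_dim, hidden)
        self.gc2 = GraphConv(hidden, img_dim)

    def forward(self, img_feat, label_emb, adj):
        # img_feat: (B, img_dim) pooled CNN features; label_emb: (L, label_dim)
        w = F.leaky_relu(self.gc1(label_emb, adj))
        w = self.gc2(w, adj)            # (L, img_dim) label-specific classifiers
        return img_feat @ w.t()         # (B, L) multi-label logits

def normalize_adj(cooc):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} of a co-occurrence matrix."""
    a = cooc + torch.eye(cooc.size(0))
    d = a.sum(1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)
```

The co-occurrence graph itself is typically estimated from label statistics on the training set; how it is binarized or re-weighted is a design choice that strongly affects how much label correlation the GCN can exploit.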
Resources
- Extreme Classification Repository: for large-scale multi-label datasets and off-the-shelf eXtreme Multi-Label Learning (XML) solvers.
- Mulan Multi-Label Learning Datasets: regular/traditional multi-label learning datasets.
- Related Works: this page categorizes works of interest to me, mainly in Multi-Label Learning.
Awards and Honors
- Second-Class Academic Scholarship, 2018-2019
- First-Class Academic Scholarship, 2017-2018
- Outstanding Graduate Student, 2017.06
- CCF Excellent University Student, 2016.10
- National Scholarship, 2015.11