{"created":"2023-06-19T11:45:02.884788+00:00","id":15252,"links":{},"metadata":{"_buckets":{"deposit":"d4505f58-f52b-463e-b686-d20994229b0d"},"_deposit":{"created_by":15,"id":"15252","owners":[15],"pid":{"revision_id":0,"type":"depid","value":"15252"},"status":"published"},"_oai":{"id":"oai:mie-u.repo.nii.ac.jp:00015252","sets":["143:602:1692661661745"]},"author_link":[],"item_8_biblio_info_6":{"attribute_name":"書誌情報","attribute_value_mlt":[{"bibliographicIssueDates":{"bibliographicIssueDate":"2021-05-13","bibliographicIssueDateType":"Issued"}}]},"item_8_description_14":{"attribute_name":"フォーマット","attribute_value_mlt":[{"subitem_description":"application/pdf","subitem_description_type":"Other"}]},"item_8_description_4":{"attribute_name":"抄録","attribute_value_mlt":[{"subitem_description":"本研究では、階層型ニューラルネットやスパース学習に共通するモデル選択の問題を扱う。特に、ここでは、正則化・縮小推定の下でのモデル選択を考えた。まず、スパース学習において、正則化法であるLASSOのバイアス問題を解決するスケーリング法を考え、その下でのモデル選択規準を導出し、応用上の妥当性を数値的に確認した。さらに、スケーリング法を利用して、ノンパラメトリック直交回帰の下での統一的なモデリング法を与えるとともに、その汎化性を理論的に解析した。一方で、階層型ニューラルネットについては、モデル選択に関係して、その深層化による学習の傾向とオーバーフィッテイングの関係を調べた。","subitem_description_type":"Abstract"},{"subitem_description":"In this project, we considered a model selection problem that is common for both of layered neural nets and sparse modeling. We considered model selection under regularization and shrinkage methods. In a sparse modeling, we derived a scaling method for LASSO, in which a bias problem is relaxed. And, we derived a risk-based model selection criterion for the estimate under the proposed method. We confirm its effectiveness through numerical experiments. Additionally, by introducing a scaling method, we derived a unified modeling method under a non-parametric orthogonal regression problem and we analyzed the generalization properties of the proposed method. 
On the other hand, in layered neural nets, we found that a deep structure affects over-fitting to noise.","subitem_description_type":"Abstract"}]},"item_8_description_5":{"attribute_name":"内容記述","attribute_value_mlt":[{"subitem_description":"2018年度~2020年度科学研究費補助金(基盤研究(C))研究成果報告書","subitem_description_type":"Other"}]},"item_8_description_64":{"attribute_name":"科研費番号","attribute_value_mlt":[{"subitem_description":"18K11433","subitem_description_type":"Other"}]},"item_8_publisher_30":{"attribute_name":"出版者","attribute_value_mlt":[{"subitem_publisher":"三重大学"}]},"item_8_text_31":{"attribute_name":"出版者(ヨミ)","attribute_value_mlt":[{"subitem_text_value":"ミエダイガク"}]},"item_8_text_65":{"attribute_name":"資源タイプ(三重大)","attribute_value_mlt":[{"subitem_text_value":"Kaken / 科研費報告書"}]},"item_8_version_type_15":{"attribute_name":"著者版フラグ","attribute_value_mlt":[{"subitem_version_resource":"http://purl.org/coar/version/c_970fb48d4fbd8a85","subitem_version_type":"VoR"}]},"item_creator":{"attribute_name":"著者","attribute_type":"creator","attribute_value_mlt":[{"creatorNames":[{"creatorName":"萩原, 克幸","creatorNameLang":"ja"},{"creatorName":"ハギワラ, カツユキ","creatorNameLang":"ja-Kana"},{"creatorName":"Hagiwara, Katsuyuki","creatorNameLang":"en"}]}]},"item_files":{"attribute_name":"ファイル情報","attribute_type":"file","attribute_value_mlt":[{"accessrole":"open_date","date":[{"dateType":"Available","dateValue":"2022-11-22"}],"displaytype":"detail","filename":"2022RP0011.pdf","filesize":[{"value":"86.9 kB"}],"format":"application/pdf","licensetype":"license_note","mimetype":"application/pdf","url":{"label":"2022RP0011","url":"https://mie-u.repo.nii.ac.jp/record/15252/files/2022RP0011.pdf"},"version_id":"79ffd540-d478-4e95-99ab-e5e35fe725a0"}]},"item_keyword":{"attribute_name":"キーワード","attribute_value_mlt":[{"subitem_subject":"スペース学習","subitem_subject_scheme":"Other"},{"subitem_subject":"階層型ニューラルネット","subitem_subject_scheme":"Other"},{"subitem_subject":"モデル選択","subitem_subject_scheme":"Other"},{"subitem_subject":"正則化","subitem_subject_scheme":"Other"},{"subitem_subject":"縮小推定","subitem_subject_scheme":"Other"}]},"item_language":{"attribute_name":"言語","attribute_value_mlt":[{"subitem_language":"jpn"}]},"item_resource_type":{"attribute_name":"資源タイプ","attribute_value_mlt":[{"resourcetype":"research report","resourceuri":"http://purl.org/coar/resource_type/c_18ws"}]},"item_title":"縮小推定を導入した貪欲法の下でのモデル選択規準についての研究","item_titles":{"attribute_name":"タイトル","attribute_value_mlt":[{"subitem_title":"縮小推定を導入した貪欲法の下でのモデル選択規準についての研究","subitem_title_language":"ja"},{"subitem_title":"On model selection criteria under shrinkage estimation in greedy learning ","subitem_title_language":"en"}]},"item_type_id":"8","owner":"15","path":["1692661661745"],"pubdate":{"attribute_name":"PubDate","attribute_value":"2022-11-22"},"publish_date":"2022-11-22","publish_status":"0","recid":"15252","relation_version_is_last":true,"title":["縮小推定を導入した貪欲法の下でのモデル選択規準についての研究"],"weko_creator_id":"15","weko_shared_id":-1},"updated":"2023-11-02T01:08:08.429616+00:00"}