Posts from 2025/06/07 (1)
Attention, Please!!!
[논문 리뷰] Are Reasoning Models More Prone to Hallucinations?
Are Reasoning Models More Prone to Hallucination? — Recently evolved large reasoning models (LRMs) show powerful performance in solving complex tasks with long chain-of-thought (CoT) reasoning capability. As these LRMs are mostly developed by post-training on formal reasoning tasks, whether they generalize… (arxiv.org)

Key research question: this paper was posted to arXiv on May 29, 2025…
Paper Review
2025. 6. 7. 22:15