Attention, Please!!!
[Paper Review] Are Reasoning Models More Prone to Hallucinations?
Are Reasoning Models More Prone to Hallucination? — Recently evolved large reasoning models (LRMs) show powerful performance in solving complex tasks with long chain-of-thought (CoT) reasoning capability. As these LRMs are mostly developed by post-training on formal reasoning tasks, whether they generalize... (arxiv.org) Core research question: this paper was posted on May 29, 2025 to arXi..
Paper Review
2025. 6. 7. 22:15