<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>DSpace community: College of Management</title>
    <link>http://140.116.207.99/handle/987654321/157</link>
    <description>None</description>
    <textInput>
      <title>The community's search engine</title>
      <description>Search the Channel</description>
      <name>s</name>
      <link>http://140.116.207.99/simple-search</link>
    </textInput>
    <item>
      <title>Nostalgic tourists acquiring resilience and emotional solidarity through self-congruity and functional congruity</title>
      <link>http://140.116.207.99/handle/987654321/324460</link>
      <description>title: Nostalgic tourists acquiring resilience and emotional solidarity through self-congruity and functional congruity abstract: From the perspective of stimulus-organism-response (S-O-R) theory, this study explores whether self-congruity and functional congruity cause tourists to form positive nostalgic emotions toward heritage destinations and whether the nostalgic emotions elicited by tourists' experiences at heritage destinations promote their physical and mental health resilience, as well as their emotional solidarity with local residents. Data were collected from Daxi Old Street, Lukang Old Street, and Anping Old Street, located in northern, central, and southern Taiwan, respectively. A total of 959 responses were collected. The research results reveal that self-congruity and functional congruity positively impact tourists' nostalgic emotions in heritage destinations. Product involvement does not moderate self-congruity's effect on nostalgia, but it does moderate functional congruity's impact. The study also supports the positive influence of nostalgic emotions on both tourist resilience and emotional solidarity in heritage tourism. These findings are crucial for governments and stakeholders planning and managing heritage tourist destinations.
&lt;br&gt;description: Reference work for faculty promotion, second semester of academic year 113 (2024-25)
&lt;br&gt;</description>
      <pubDate>Mon, 29 Dec 2025 02:55:50 GMT</pubDate>
    </item>
    <item>
      <title>NTIRE 2024 Challenge on Image Super-Resolution (×4): Methods and Results</title>
      <link>http://140.116.207.99/handle/987654321/324007</link>
      <description>title: NTIRE 2024 Challenge on Image Super-Resolution (×4): Methods and Results abstract: This paper reviews the NTIRE 2024 challenge on image super-resolution (×4), highlighting the solutions proposed and the outcomes obtained. The challenge involves generating corresponding high-resolution (HR) images, magnified by a factor of four, from low-resolution (LR) inputs using prior information. The LR images originate from bicubic downsampling degradation. The aim of the challenge is to obtain designs/solutions with the most advanced SR performance, with no constraints on computational resources (e.g., model size and FLOPs) or training data. The track of this challenge assesses performance with the PSNR metric on the DIV2K testing dataset. The competition attracted 199 registrants, with 20 teams submitting valid entries. This collective endeavour not only pushes the boundaries of performance in single-image SR but also offers a comprehensive overview of current trends in this field. © 2024 IEEE.
&lt;br&gt;description: SCOPUS
&lt;br&gt;</description>
      <pubDate>Fri, 19 Dec 2025 06:52:17 GMT</pubDate>
    </item>
    <item>
      <title>DRCT: Saving Image Super-Resolution away from Information Bottleneck</title>
      <link>http://140.116.207.99/handle/987654321/324005</link>
      <description>title: DRCT: Saving Image Super-Resolution away from Information Bottleneck abstract: In recent years, Vision Transformer-based approaches for low-level vision tasks have achieved widespread success. Unlike CNN-based models, Transformers are more adept at capturing long-range dependencies, enabling the reconstruction of images utilizing non-local information. In the domain of super-resolution, Swin-transformer-based models have become mainstream due to their capability of global spatial information modeling and their shifted-window attention mechanism that facilitates the interchange of information between different windows. Many researchers have enhanced model performance by expanding the receptive fields or designing meticulous networks, yielding commendable results. However, we observed that it is a general phenomenon for the feature map intensity to be abruptly suppressed to small values towards the network's end. This implies an information bottleneck and a diminishment of spatial information, implicitly limiting the model's potential. To address this, we propose the Dense-residual-connected Transformer (DRCT), aimed at mitigating the loss of spatial information and stabilizing the information flow through dense-residual connections between layers, thereby unleashing the model's potential and saving the model from the information bottleneck. Experimental results indicate that our approach surpasses state-of-the-art methods on benchmark datasets and performs commendably at the NTIRE-2024 Image Super-Resolution (×4) Challenge. Our source code is available at https://github.com/ming053l/DRCT. © 2024 IEEE.
&lt;br&gt;description: SCOPUS
&lt;br&gt;</description>
      <pubDate>Fri, 19 Dec 2025 06:52:02 GMT</pubDate>
    </item>
    <item>
      <title>A Closer Look at Spatial-Slice Features Learning for COVID-19 Detection</title>
      <link>http://140.116.207.99/handle/987654321/324003</link>
      <description>title: A Closer Look at Spatial-Slice Features Learning for COVID-19 Detection abstract: Conventional Computed Tomography (CT) imaging recognition faces two significant challenges: (1) There is often considerable variability in the resolution and size of each CT scan, necessitating strict requirements for the input size and adaptability of models. (2) CT scans contain a large number of out-of-distribution (OOD) slices. The crucial features may only be present in specific spatial regions and slices of the entire CT scan. How can we effectively figure out where these are located? To deal with this, we introduce an enhanced Spatial-Slice Feature Learning (SSFL++) framework specifically designed for CT scans. It aims to filter out OOD data within the entire CT scan, enabling us to select crucial spatial slices for analysis while reducing redundancy by 70% overall. Meanwhile, we propose a Kernel-Density-based slice Sampling (KDS) method to improve stability during the training and inference stages, thereby speeding up convergence and boosting performance. As a result, the experiments demonstrate the promising performance of our model using a simple EfficientNet-2D (E2D) model, even with only 1% of the training data. The efficacy of our approach has been validated on the COVID-19-CT-DB datasets provided by the DEF-AI-MIA workshop, in conjunction with CVPR 2024. Our code is available at https://github.com/ming053l/E2D. © 2024 IEEE.
&lt;br&gt;description: SCOPUS
&lt;br&gt;</description>
      <pubDate>Fri, 19 Dec 2025 06:51:54 GMT</pubDate>
    </item>
  </channel>
</rss>

