Departamento de Matemáticas UAM

Featured News

Department Calendar

  • Provisional information on the groups and timetables of the courses taught by the Department of Mathematics for the 2023-2024 academic year.

  • Proposed Final Degree Projects (Trabajos de Fin de Grado) for the 2023-2024 academic year.

@matematicasuam channel

Link to the Department's channel on YouTube.

PIM (Pequeño Instituto de Matemáticas)

This project, aimed at young people between 14 and 18 years of age and intended to foster interest in mathematics, was created by the Instituto de Ciencias Matemáticas (ICMAT) in collaboration with our Department, the Universidad Autónoma de Madrid and the Real Sociedad Matemática Española.

The project began in the 2022-2023 academic year. Further information is available on its website.

 

Machine learning in Madrid

Monday, 31 May 2021, 12:00-13:00

Speaker: David Arroyo (CSIC)
Webpage: http://www.itefi.csic.es/es/personal/arroyo-guardeno-david
 
Title: Trustworthy, Reliable and Engaging Scientific Communication Approaches (TRESCA)
 
Link: https://conectaha.csic.es/b/mar-mrf-oj9-ui0
 
Abstract:
 
On the one hand, decentralised systems that do not rely on the authority of a Trusted Third Party pose the challenge of determining whether a piece of information is authentic. On the other hand, people consume more news and information from decentralised sources, such as social networks or messaging apps, than from centralised media such as newspapers or national television channels. Decentralisation and the multiplication of types and sources of information erode our ability to discern accurate from inaccurate information. Traditionally, information quality and reliability were established based on the credibility and reputation of the source. On social media platforms and messaging apps such as WhatsApp or Telegram, attribution cannot be properly established. As a result, curating news data along the entire data life cycle becomes a difficult task. Clearly categorising news on the continuum from unintentionally inaccurate to intentionally misleading information remains problematic, and poor identification of non-genuine information is a serious obstacle to the effective containment of false information.
 
In this seminar we will explore the design implications for the construction of a misinformation widget that guides users in assessing the trustworthiness of various sources of information. A critical aspect in the design of the widget is the identification of the best news classification tools and methodologies. To achieve this objective, one option is to rely on fact-checking platforms and human experts to obtain feedback, which can be extended by leveraging the so-called wisdom of crowds and performing news curation as a collaborative effort between users and experts. Expert-based systems are accurate but costly and not scalable, while crowd-based systems can be biased by herding behaviour. To overcome these limitations, we can consider developing automatic detection techniques based on Natural Language Processing (NLP) and more advanced Machine Learning (ML) techniques. Nonetheless, the selection of adequate models and datasets for their tuning and training is itself a challenge. Thus, we explore the option of adopting a so-called "human-on-the-loop" approach, which integrates expert knowledge on fact checking with the automatic detection of fake news and misinformation. Specifically, we propose a methodology that leverages fact-checking platforms to perform dataset labelling and to validate the performance of NLP and ML tools for the automatic classification of information.
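
As a concrete illustration of that last step, the sketch below trains and validates a baseline text classifier on claims whose labels come from fact-checking verdicts. It is only a minimal example of the general approach described in the abstract, not the TRESCA pipeline itself: the file name, column names and label set are assumptions made for illustration.

# Minimal sketch (illustrative, not the TRESCA pipeline): train a baseline
# classifier on claims labelled with fact-checkers' verdicts and validate it
# on held-out data. File name, columns and labels below are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical dataset: one claim per row, with a "verdict" column assigned
# by fact-checkers (e.g. "accurate" / "misleading").
data = pd.read_csv("fact_checked_claims.csv")  # columns: claim, verdict

X_train, X_test, y_train, y_test = train_test_split(
    data["claim"], data["verdict"],
    test_size=0.2, random_state=0, stratify=data["verdict"],
)

# TF-IDF features plus logistic regression: a cheap, common baseline for
# news and claim classification before moving to heavier NLP/ML models.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# Held-out evaluation; in a "human-on-the-loop" setting these predictions
# would be reviewed by experts and disagreements fed back as new labels.
print(classification_report(y_test, model.predict(X_test)))

In practice, the held-out report would be complemented by expert review of the model's mistakes, closing the loop between fact-checkers and the automatic classifier.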
 