{"id":30,"date":"2025-05-30T14:28:51","date_gmt":"2025-05-30T18:28:51","guid":{"rendered":"https:\/\/crimrapportannuel202425.wordpress.com\/?p=30"},"modified":"2025-06-06T13:49:24","modified_gmt":"2025-06-06T17:49:24","slug":"publications-scientifiques","status":"publish","type":"post","link":"https:\/\/rapportannuel.crim.ca\/?p=30","title":{"rendered":"Publications scientifiques"},"content":{"rendered":"\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"2000\" height=\"1333\" src=\"https:\/\/rapportannuel.crim.ca\/wp-content\/uploads\/2025\/05\/jfxavmxg.jpeg\" alt=\"\" class=\"wp-image-37\" srcset=\"https:\/\/rapportannuel.crim.ca\/wp-content\/uploads\/2025\/05\/jfxavmxg.jpeg 2000w, https:\/\/rapportannuel.crim.ca\/wp-content\/uploads\/2025\/05\/jfxavmxg-300x200.jpeg 300w, https:\/\/rapportannuel.crim.ca\/wp-content\/uploads\/2025\/05\/jfxavmxg-1024x682.jpeg 1024w, https:\/\/rapportannuel.crim.ca\/wp-content\/uploads\/2025\/05\/jfxavmxg-768x512.jpeg 768w, https:\/\/rapportannuel.crim.ca\/wp-content\/uploads\/2025\/05\/jfxavmxg-1536x1024.jpeg 1536w\" sizes=\"auto, (max-width: 2000px) 100vw, 2000px\" \/><\/figure>\n\n\n\n<p><strong>L&rsquo;ann\u00e9e 2024 a \u00e9t\u00e9 marqu\u00e9e par une s\u00e9rie de publications notables du CRIM dans des domaines divers tels que la reconnaissance \u00e9motionnelle multimodale, la v\u00e9rification du locuteur, l&rsquo;intelligence artificielle pour l&rsquo;accessibilit\u00e9, et l&rsquo;int\u00e9gration des standards g\u00e9ospatiaux. <\/strong><\/p>\n\n\n\n<p>Parmi les contributions majeures, on retrouve la pr\u00e9sentation de travaux innovants dans des conf\u00e9rences internationales prestigieuses telles que l&rsquo;IEEE CVPR, ISCA ICASSP, et NeurIPS. <br><br>En particulier, des \u00e9tudes sur l&rsquo;utilisation de l&rsquo;attention crois\u00e9e pour la fusion audio-visuelle dans la reconnaissance \u00e9motionnelle, ainsi que des recherches sur la robustesse des syst\u00e8mes de v\u00e9rification du locuteur face au bruit d&rsquo;\u00e9tiquetage, ont \u00e9t\u00e9 publi\u00e9es. <br><br>Le CRIM a \u00e9galement contribu\u00e9 \u00e0 la litt\u00e9rature scientifique sur les mod\u00e8les de diffusion pour la d\u00e9tection des hypertrucages et a particip\u00e9 activement aux discussions sur les enjeux \u00e9thiques de l&rsquo;intelligence artificielle, notamment lors de s\u00e9minaires sur l&rsquo;IA responsable. Ces r\u00e9alisations soulignent l&rsquo;engagement du CRIM \u00e0 mener des recherches de pointe en intelligence artificielle tout en s&rsquo;assurant que ces innovations r\u00e9pondent aux besoins soci\u00e9taux et \u00e9thiques actuels.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-text-color has-white-color has-alpha-channel-opacity has-white-background-color has-background is-style-dots\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Livre blanc sur l\u2019IA de confiance<\/h2>\n\n\n\n<p>Le CRIM a produit un <a href=\"https:\/\/www.crim.ca\/fr\/\">livre blanc sur l\u2019IA de confiance<\/a>, destin\u00e9 \u00e0 orienter les acteurs de l\u2019innovation dans l\u2019int\u00e9gration responsable de l\u2019intelligence artificielle. Structur\u00e9 en trois volets, ce document propose d\u2019abord une r\u00e9flexion sur les principes directeurs de l\u2019IA de confiance, en mettant en lumi\u00e8re les enjeux \u00e9thiques, techniques et sociaux li\u00e9s \u00e0 la fiabilit\u00e9 des syst\u00e8mes intelligents. Il offre ensuite un guide m\u00e9thodologique structur\u00e9, d\u00e9crivant les pratiques \u00e0 adopter \u00e0 chaque \u00e9tape du cycle de d\u00e9veloppement, de la planification au d\u00e9ploiement. Enfin, le livre blanc s\u2019appuie sur plusieurs \u00e9tudes de cas illustrant concr\u00e8tement l\u2019application de ces principes dans divers domaines, notamment la prise de d\u00e9cision automatis\u00e9e, l\u2019assurance qualit\u00e9, les assistants intelligents et la v\u00e9rification biom\u00e9trique.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-text-color has-white-color has-alpha-channel-opacity has-white-background-color has-background is-style-dots\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Publications scientifiques<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Revues avec comit\u00e9 de lecture<\/h3>\n\n\n\n<p>Balafrej, I., Dahmane, M., \u201cEnhancing practicality and efficiency of deepfake detection\u201d. Scientific Report, 14(1), 31084 (2024). <a href=\"https:\/\/doi.org\/10.1038\/s41598-024-82223-y\">https:\/\/doi.org\/10.1038\/s41598-024-82223-y<\/a><\/p>\n\n\n\n<p>Rajasekhar, G. P., and Alam, J., \u201cIncongruity-Aware Cross-Modal Attention for Audio-Visual Fusion in Dimensional Emotion Recognition\u201d. IEEE Journal of Selected Topics in Signal Processing (JSTSP), June, 2024. DOI&nbsp;: <a href=\"https:\/\/doi.org\/10.1109\/JSTSP.2024.3422823\" target=\"_blank\" rel=\"noreferrer noopener\">10.1109\/JSTSP.2024.3422823<\/a><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Actes de conf\u00e9rence, colloques et ateliers<\/h3>\n\n\n\n<p>Alam, J., Alam, Md Shahidul. \u201cOn the Influence of CNN-based Feature Learning Modules in Neural Speaker Verification Framework\u201d. In SPECOM,Crowne Plaza, Belgrade, Serbia, 25-28 November 2024. <a href=\"https:\/\/doi.org\/10.1007\/978-3-031-78014-1_12\">https:\/\/doi.org\/10.1007\/978-3-031-78014-1_12<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/www.crim.ca\/fr\/centre-de-recherche-informatique-de-montreal\/organisation\/repertoire-des-employes-et-collaborateurs\/jahangir-alam\">Alam, J.<\/a> et. Al. \u201cABC System Description for NIST SRE 2024\u201d. In NIST SRE 2024 Workshop, San Juan, Puerto Rico, p. 1-9, December 3-4, 2024.<\/p>\n\n\n\n<p>Charette-Migneault, F., Avery, R., Pondi, B., Omojola, J., Vaccari, S., Membari, P., &#8230; &amp; Sundwall, J. \u201cMachine Learning Model Specification for Cataloging Spatio-Temporal Models (Demo Paper)\u201d. In Proceedings of the 3rd ACM SIGSPATIAL International Workshop on Searching and Mining Large Collections of Geospatial Data, October 2024, pp. 36-39. <a href=\"https:\/\/doi.org\/10.1145\/3681769.3698586\">https:\/\/doi.org\/10.1145\/3681769.3698586<\/a><\/p>\n\n\n\n<p>Fathan, A. and Alam, J. \u201cSelf-supervised Speaker Verification Employing a Novel Clustering Algorithm\u201d. In Proceedings of the <em>IEEE ICASSP<\/em>, Seoul, South Korea, April 24-19, 2024. DOI&nbsp;: <a href=\"https:\/\/doi.org\/10.1109\/ICASSP48485.2024.10447101\" target=\"_blank\" rel=\"noreferrer noopener\">10.1109\/ICASSP48485.2024.10447101<\/a><\/p>\n\n\n\n<p>Fathan, A. and Alam, J. \u201cAn investigative study of the effect of several regularization techniques on label noise robustness of self-supervised speaker verification systems\u201d. In Proceedings of the ISCA ODYSSEY Speaker and Language Recognition Workshop, Quebec City, Quebec, Canada, 18-21 June 2024. DOI&nbsp;: <a href=\"http:\/\/dx.doi.org\/10.21437\/odyssey.2024-7\" target=\"_blank\" rel=\"noreferrer noopener\">10.21437\/odyssey.2024-7<\/a><\/p>\n\n\n\n<p>Fathan, A. and Alam, J. \u201cContrastive Information Maximization Clustering for Self-Supervised Speaker Recognition\u201d. In Proceedings of the IEEE Conference on Artificial Intelligence (IEEE CAI), Singapore, 25-27 June 2024. DOI Bookmark&nbsp;: <a href=\"https:\/\/doi.ieeecomputersociety.org\/10.1109\/CAI59869.2024.00077\">10.1109\/CAI59869.2024.00077<\/a><\/p>\n\n\n\n<p>Fathan, A. and Alam, J. \u201cOn the influence of metric learning loss functions for robust self-supervised speaker verification to label noise\u201d. In Proceedings of the IEEE Conference on Artificial Intelligence (IEEE CAI), Singapore, 25-27 June 2024. DOI&nbsp;: <a href=\"https:\/\/doi.org\/10.1109\/CAI59869.2024.00186\" target=\"_blank\" rel=\"noreferrer noopener\">10.1109\/CAI59869.2024.00186<\/a><\/p>\n\n\n\n<p>Fathan, A. and Alam, J. \u201cOn the impact of several regularization techniques on label noise robustness of self-supervised speaker verification systems\u201d. In Proceedings of the ISCA INTERSPEECH, Kos Island, Greece, September 1-5, 2024.<\/p>\n\n\n\n<p>Fathan, A., Zhu, X, and Alam, J. \u201cEnhanced label noise robustness through early adaptive filtering for the self-supervised speaker verification task\u201d. In NeurIPS 4th Efficient Natural Language and Speech Processing Workshop, Vancouver, Canada, 10-15 December 2024.<\/p>\n\n\n\n<p>Ganguly, R., Dian Bah, M., Dahmane, M. \u201cDiffusion Models as a Representation Learner for Deepfake Image Detection\u201d. In Proceedings Pattern Recognition: 27<sup>th<\/sup> International Conference, ICPR 2024, Kolkata, India, December 1-5, 2024, Proceedings, Part XXI. <a href=\"https:\/\/doi.org\/10.1007\/978-3-031-78305-0_15\">https:\/\/doi.org\/10.1007\/978-3-031-78305-0_15<\/a><\/p>\n\n\n\n<p>Gupta, V. (2025). \u201cAdvances in OpenASR21 Evaluation with Increased Temporal Resolution for Speech Self-supervised Learning Models\u201d. In Karpov, A., Deli\u0107, V. (eds) Speech and Computer. SPECOM 2024. Lecture Notes in Computer Science(), vol 15299. Springer, Cham. <a href=\"https:\/\/doi.org\/10.1007\/978-3-031-77961-9_5\">https:\/\/doi.org\/10.1007\/978-3-031-77961-9_5<\/a><\/p>\n\n\n\n<p>Moubtahij, A., Cummings, C.-W., Handan, A., Galy, E., Charton, E. \u00ab&nbsp;Participation du CRIM \u00e0 DEFT 2024&nbsp;: Utilisation de petits mod\u00e8les de Langue pour des QCMs dans le domaine m\u00e9dical&nbsp;\u00bb. In Actes du D\u00e9fi Fouille de Textes@TALN 2024, pages 11\u201322, Toulouse, France. ATALA and AFPC. <a href=\"https:\/\/aclanthology.org\/2024.jeptalnrecital-deft.2.pdf\">https:\/\/aclanthology.org\/2024.jeptalnrecital-deft.2.pdf<\/a><\/p>\n\n\n\n<p>Praveen Rajasekhar, G. and Alam, J. \u201cDynamic Cross Attention for Audio-Visual Person Verification\u201d. Accepted for publication in the IEEE Conference on Automatic Face and Gesture Recognition, Istanbul, Turkey, 27-31 May 2024. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2403.04661\">https:\/\/doi.org\/10.48550\/arXiv.2403.04661<\/a><\/p>\n\n\n\n<p>Praveen Rajasekhar, G. and Alam, J. \u201cAudio-Visual Person Verification based on Recursive Fusion of Joint Cross-Attention\u201d. Accepted for publication in the IEEE Conference on Automatic Face and Gesture Recognition, Istanbul, Turkey, 27-31 May 2024. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2403.04654\">https:\/\/doi.org\/10.48550\/arXiv.2403.04654<\/a><\/p>\n\n\n\n<p>Praveen Rajasekhar, G. and Alam, J. \u201cRecursive Joint Cross-Modal Attention for Multimodal Fusion in Dimensional Emotion Recognition\u201d. In the IEEE Computer Vision and Pattern Recognition (IEEE CVPR) Workshop (6th ABAW), Seattle, USA, 17-21 June 2024. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2403.13659\">https:\/\/doi.org\/10.48550\/arXiv.2403.13659<\/a><\/p>\n\n\n\n<p>Praveen Rajasekhar, G. and Alam, J. \u201cCross-Attention is not Always Needed: Dynamic Cross-Attention for Audio-Visual Dimensional Emotion Recognition\u201d. In the IEEE Conference on Multimedia and Expo (IEEE ICME), Niagara Falls, Canada, 15-19 July 2024. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2403.19554\">https:\/\/doi.org\/10.48550\/arXiv.2403.19554<\/a><\/p>\n\n\n\n<p>Praveen Rajasekhar, G. and Alam, J. \u201cLess is Enough: Adapting Pre-trained Vision Transformers for Audio-Visual Speaker Verification\u201d. In NeurIPS 4th Efficient Natural Language and Speech Processing Workshop, Vancouver, Canada, 10-15 December 2024.<\/p>\n\n\n\n<p>Praveen Rajasekhar, G., Alam, J. \u201cCross-Modal Transformers for Audio-Visual Person Verification\u201d. In Proceedings ofThe Speaker and Language Recognition Workshop (Odyssey 2024), pp. 240-246. DOI:<a href=\"http:\/\/dx.doi.org\/10.21437\/odyssey.2024-34\" target=\"_blank\" rel=\"noreferrer noopener\">10.21437\/odyssey.2024-34<\/a><\/p>\n\n\n\n<p>Raymond, C., Ratt\u00e9, S., &amp; Daoust, M. K. \u201cMerging Roles and Expertise: Redefining Stakeholder Characterization in Explainable Artificial Intelligence\u201d. In 2024 34th International Conference on Collaborative Advances in Software and COmputiNg (CASCON) (pp. 1-7). IEEE, november 2024.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Rapport technique<\/h3>\n\n\n\n<p>Lalonde, M., Boulianne, G., Rutherford, N., Beaulieu, M., Ghodrati, H., Dahmane, M., \u00ab&nbsp;D\u00e9sinformation Visuelle et Multimodale: Analyse, enjeux, solutions&nbsp;\u00bb, Montr\u00e9al, 86 pages, mars 2025.<\/p>\n\n\n\n<p>Morsli, A., \u00ab&nbsp;D\u00e9veloppement de composantes d\u2019extraction de contenu s\u00e9mantique \u00e0 partir d\u2019enregistrements audio, en vue de leur application \u00e0 la lutte contre la d\u00e9sinformation&nbsp;\u00bb, 10 avril 2024.<\/p>\n\n\n\n<p>Praveen Rajasekhar, G. and Alam, J. \u201cInconsistency-Aware Cross-Attention for Audio-Visual Fusion in Dimensional Emotion Recognition\u201d. In arXiv, June 30, 2024. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2405.12853\">https:\/\/doi.org\/10.48550\/arXiv.2405.12853<\/a><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Livre blanc<\/h3>\n\n\n\n<p>Sotir, M., Galy, \u00c9., Boulianne, G., Charton, \u00c9., Charette-Migneault, F., Dahmane, M., Frenette, X., Ghodrati, H., Gierschendorf, J., Handan, A., Lalonde, M., Lyman, J., Moubtahij, A., Queudot, M., Raymond, C., Rebout, L., Savard, M., \u00ab&nbsp;L\u2019IA de Confiance &#8211; Des Principes \u00e0 la Pratique&nbsp;\u00bb, 88 pages, septembre 2024.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Billet<\/h3>\n\n\n\n<p>Blanchard, J., Quand l\u2019IA transforme le secteur de l\u2019\u00e9nergie. Chronique dans Les Connecteurs No. 10, 15 janvier 2025.<\/p>\n\n\n\n<p>Charton, \u00c9., \u00c9thique, d\u00e9mocratie et encadrement de l\u2019IA. Chronique dans <em>Les Connecteurs No. 7<\/em>, 20 novembre 2024.<\/p>\n\n\n\n<p>Charton, \u00c9., L\u2019IA est-elle bonne pour votre sant\u00e9?. Chronique dans <em>Les Connecteurs No. 13<\/em>, 27 f\u00e9vrier 2025.<\/p>\n\n\n\n<p>Charton, \u00c9., Les enjeux de l\u2019intelligence artificielle dans le contexte municipal. <em>G\u00e9nial La revue, Dossier sp\u00e9cial<\/em>, Montr\u00e9al, printemps 2025.<\/p>\n\n\n\n<p>Gierschendorf, J., Conception des syst\u00e8mes d\u2019aide \u00e0 la d\u00e9cision (SAD) dans l\u2019industrie. Chronique dans<em> Les Connecteurs No. 6<\/em>, 6 novembre 2024.<\/p>\n\n\n\n<p>Ghodrati, H., Une r\u00e9volution cin\u00e9matographique. Chronique dans <em>Les Connecteurs No. 14<\/em>, 24 mars 2025.<\/p>\n\n\n\n<p>Habas, M.-P., De l\u2019acad\u00e9mie \u00e0 l\u2019entreprise&nbsp;: R\u00e9ussir l\u2019op\u00e9rationnalisation de l\u2019IA. Chronique dans <em>Les Connecteurs No. 2<\/em>, 11 septembre 2024.<\/p>\n\n\n\n<p>Raymond, C., L\u2019IA&nbsp;: l\u2019acteur silencieux des \u00e9lections. Chronique dans <em>Les Connecteurs No. 9<\/em>, 19 d\u00e9cembre 2024.<\/p>\n\n\n\n<p>Savard, M., L\u2019\u00e8re quantique arrive-t-elle enfin? Chronique dans <em>Les Connecteurs No. 8<\/em>, 5 d\u00e9cembre 2024.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Diaporama<\/h3>\n\n\n\n<p>Charrette Migneault, F. \u201cStandards Demo Showcase: Open Science Persistent Demonstrator\u201d. 129<sup>th<\/sup> OGC Member Meeting, Montr\u00e9al, June 19, 2024. DOI:<a href=\"http:\/\/dx.doi.org\/10.13140\/RG.2.2.11244.58245\" target=\"_blank\" rel=\"noreferrer noopener\">10.13140\/RG.2.2.11244.58245<\/a><\/p>\n\n\n\n<p>Charette Migneault, F., \u201cPerspectives on the Integration of OGC Standards to Improve Interoperability of Open Science Data Processing Workflows\u201d. ESIP Meeting, Asheville, NC, July 22-26, 2024. DOI:<a href=\"http:\/\/dx.doi.org\/10.13140\/RG.2.2.19252.26243\" target=\"_blank\" rel=\"noreferrer noopener\">10.13140\/RG.2.2.19252.26243<\/a><\/p>\n\n\n\n<p>Charrette Migneault, F. \u201cOGC Testbed-20 Demonstration Days&nbsp;: CRIM Demonstration\u201d. Open Geospatial Consortium Testbed-20, Demo Days. February 24-25, 2025.<\/p>\n\n\n\n<p>Charrette Migneault, F. \u201cTestbed-20 GeoDataCubes Integration Test Results: CRIM Demonstration\u201d. The 131<sup>st<\/sup> OGC Member Meeting, Roma, Lazio, Italy, March 3-6, 2025.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Rapport de veille<\/h3>\n\n\n\n<p>Rutherford, N., Lalonde, M. \u201cNavigating the Climate Crisis: Information Integrity and the Challenge of Climate Mis\/Disinformation\u201d. Rapport de veille du Laboratoire sur l\u2019int\u00e9grit\u00e9 de l\u2019information, octobre 2024.<\/p>\n\n\n\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\">\n<hr class=\"wp-block-separator has-text-color has-white-color has-alpha-channel-opacity has-white-background-color has-background is-style-dots\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Saviez-vous que?<\/h2>\n\n\n\n<p>En 2017-2018, le CRIM d\u00e9veloppe des outils pr\u00e9dictifs le Service s\u00e9curit\u00e9 incendie de Montr\u00e9al (SIM) afin de pr\u00e9dire le temps n\u00e9cessaire pour arriver sur les lieux d\u2019une alerte, \u00e0 partir d\u2019une caserne, pour une unit\u00e9 du SIM.&nbsp;<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-not-stacked-on-mobile is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p class=\"has-text-align-right\">\u2190 <a href=\"https:\/\/rapportannuel.crim.ca\/?p=27\">Pr\u00e9c\u00e9dent<\/a><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p><a href=\"https:\/\/rapportannuel.crim.ca\/\" data-type=\"page\" data-id=\"281\">Accueil<\/a> \u2192<\/p>\n<\/div>\n<\/div>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>L&rsquo;ann\u00e9e 2024 a \u00e9t\u00e9 marqu\u00e9e par une s\u00e9rie de publications notables du CRIM dans des domaines divers tels que la reconnaissance \u00e9motionnelle multimodale, la v\u00e9rification du locuteur, l&rsquo;intelligence artificielle pour l&rsquo;accessibilit\u00e9, et l&rsquo;int\u00e9gration des standards g\u00e9ospatiaux. <\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"slim_seo":{"title":"Publications scientifiques - Rapport annuel du CRIM, 2024-2025","description":"L'ann\u00e9e 2024 a \u00e9t\u00e9 marqu\u00e9e par une s\u00e9rie de publications notables du CRIM dans des domaines divers tels que la reconnaissance \u00e9motionnelle multimodale, la v\u00e9rif"},"footnotes":""},"categories":[2],"tags":[],"class_list":["post-30","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/rapportannuel.crim.ca\/index.php?rest_route=\/wp\/v2\/posts\/30","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/rapportannuel.crim.ca\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/rapportannuel.crim.ca\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/rapportannuel.crim.ca\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/rapportannuel.crim.ca\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=30"}],"version-history":[{"count":6,"href":"https:\/\/rapportannuel.crim.ca\/index.php?rest_route=\/wp\/v2\/posts\/30\/revisions"}],"predecessor-version":[{"id":370,"href":"https:\/\/rapportannuel.crim.ca\/index.php?rest_route=\/wp\/v2\/posts\/30\/revisions\/370"}],"wp:attachment":[{"href":"https:\/\/rapportannuel.crim.ca\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=30"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/rapportannuel.crim.ca\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=30"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/rapportannuel.crim.ca\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=30"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}