Chronic gastroduodenal symptoms disproportionately affect females of childbearing age; however, the effect of menstrual cycling on gastric electrophysiology is poorly defined. To establish the effect of the menstrual cycle on gastric electrophysiology, healthy subjects underwent non-invasive Body Surface Gastric Mapping (BSGM; 8x8 array) with a validated symptom-logging app (Gastric Alimetry, New Zealand). Participants were premenopausal females in the follicular (n=26) and luteal phases (n=18). Postmenopausal females (n=30) and males (n=51) served as controls. Principal gastric frequency (PGF), BMI-adjusted amplitude, Gastric Alimetry Rhythm Index (GA-RI), fasted-fed amplitude ratio (ff-AR), meal response curves, and symptom burden were analysed. Menstrual cycle-related electrophysiological changes were then transferred to an established, anatomically accurate computational model of gastric fluid dynamics (meal viscosity 0.1 Pa·s) to predict the impact on gastric mixing and emptying. PGF was significantly higher in the luteal vs. follicular phase (mean 3.21 cpm, SD 0.17, vs. 2.94 cpm, SD 0.17; p<0.001) and vs. males (3.01 cpm, SD 0.2; p<0.001). In the computational model, this translated to 8.1% higher gastric mixing strength and 5.3% faster gastric emptying in the luteal versus the follicular phase. Postmenopausal females also exhibited higher PGF than females in the follicular phase (3.10 cpm, SD 0.24, vs. 2.94 cpm, SD 0.17; p=0.01), and higher BMI-adjusted amplitude (40.7 µV (33.02-52.58) vs. 29.6 µV (26.15-39.65); p<0.001), GA-RI (0.60 (0.48-0.73) vs. 0.43 (0.30-0.60); p=0.005), and ff-AR (2.51 (1.79-3.47) vs. 1.48 (1.21-2.17); p=0.001) than males. There were no differences in symptoms. These results define variations in gastric electrophysiology associated with the human menstrual cycle and menopause.
Background: Many social, demographic, and economic barriers hinder the early detection of breast cancer (BC). We examined these barriers to early detection. Methods: The PubMed, Scopus, and Web of Science databases were chosen for a comprehensive literature search. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were used to select relevant studies on decreased rates of BC screening, delayed presentation of BC, and advanced-stage diagnosis of BC. Results: The literature demonstrates that several determinants had a significant impact on the delay in seeking medical help, the rate of performing breast cancer screening (BCS), and the stage at diagnosis of BC. Younger age, rural residence, being non-white, being single, low socioeconomic status, absence of medical insurance, having no paid job, low educational level, a positive family history of BC, and having TNBC or HER2E BC subtypes were significantly associated with presenting at advanced stages, decreased rates of BCS, and delayed presentation. Meanwhile, the associations of BMI, parity, religion, and menopausal status with these outcomes were underexamined in the literature. Conclusion: Efforts to promote early detection of BC should take these sociodemographic disparities into consideration. To address them, raising public awareness, implementing universal health coverage (UHC), and increasing government expenditure on health and education are needed, especially in vulnerable populations.
Background: The historical view that pregnancy intention is dichotomous (i.e., intending or not intending pregnancy), and the notion that all individuals not intending pregnancy should be using highly effective contraceptive methods, oversimplify how we view contraceptive decision-making. To better understand this, we studied contraceptive congruence as an alternative, 3-level measure describing methods as very congruent, somewhat congruent, or incongruent with one's individual attitudes about becoming pregnant. Methods: This secondary data analysis included 982 MyNewOptions study participants who were not intending pregnancy within the next year. The cross-sectional survey assessed attitudes about how important it is to avoid pregnancy, how pleased or upset one would be if pregnant, and current contraceptive method use. Participants' answers to the attitudinal questions and the effectiveness of their current contraceptive method were used to determine congruence categories. Results: Contraceptive methods included LARC (8%), other prescription methods (50%), non-prescription methods (30%), and no method (12%). Methods for 23% of participants were very congruent, 48% somewhat congruent, and 29% incongruent with attitudes about becoming pregnant. Contraceptive congruence was significantly associated with contraceptive satisfaction in bivariate analysis. Predictors of contraceptive congruence included being married or living with a partner, full-time employment, and intending future pregnancy in the next 1-5 years. Conclusion: Contraceptive congruence is a novel measure that acknowledges pregnancy ambivalence and is associated with higher contraceptive satisfaction scores. Future contraception research should strive for robust, patient-centered measures of contraceptive use that acknowledge the complex attitudes affecting individual contraceptive behavior and satisfaction.
OBJECTIVES: Diagnosis of smell/taste dysfunction is necessary for appropriate medical care. This study examines factors affecting testing and diagnosis of smell/taste disorders. METHODS: The online USA Smell and Taste Patient Survey was made available to US patients with smell/taste disorders between April 6 and 20, 2022; 4,728 respondents were included. RESULTS: 1,791 (38%) patients reported a documented diagnosis. Patients most often saw family practitioners (34%), otolaryngologists (20%), and taste/smell clinics (6%) for smell/taste dysfunction. 64% of patients who went to taste/smell clinics received smell testing, followed by 39% of patients who saw otolaryngologists and 31% of patients who saw family practitioners. Factors associated with increased odds of diagnosis included age (25-39 years (OR 2.97, 95% CI [2.25, 3.95]), 40-60 years (OR 3.3, 95% CI [2.56, 4.52]), and >60 years (OR 4.25, 95% CI [3.21, 5.67]) vs. 18-24 years), male gender (OR 1.26, 95% CI [1.07, 1.48]), insurance status (private (OR 1.61, 95% CI [1.15, 2.30]) or public (OR 2.03, 95% CI [1.42, 2.95]) vs. uninsured), perception of one's family practitioner as knowledgeable (OR 2.12, 95% CI [1.16, 3.90]), otolaryngologic evaluation (OR 6.17, 95% CI [5.16, 7.38]), and psychophysical smell testing (OR 1.77, 95% CI [1.42, 2.22]). CONCLUSION: Psychophysical testing, otolaryngologic evaluation, patient assessment of family practitioner knowledge level, insurance, age, and gender are significant factors in obtaining a smell/taste dysfunction diagnosis. This study identifies barriers to diagnosis, including lack of insurance or of access to specialist evaluation, and highlights the importance of educating family practitioners in the diagnosis and management of patients with smell/taste disorders.
Background Iron deficiency anemia (IDA) is a prevalent hematological complication associated with gastrointestinal (GI) cancers due to increased loss of iron and decreased iron absorption. We estimated the efficacy of parenteral iron on hemoglobin levels, blood transfusion needs, and overall quality of life in patients with GI malignancies. Methods In this systematic review, we used PubMed, Cochrane, EMBASE, CINAHL and Scopus to conduct an electronic search from January 1, 2010 to March 24, 2022 with no language or study design restrictions. Studies were included if they discussed IDA, GI neoplasms, and the use of iron supplementation (with or without erythropoiesis-stimulating agents [ESAs]), defined anemia, and had an adult patient population. Studies were excluded if they were published before 2010. We assessed the efficacy of parenteral iron in comparison to other iron supplementation methods for treating IDA in GI cancer patients. The Cochrane Risk of Bias Tool 2 (RoB 2) and the Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) assessment tools were used to assess the quality of the included studies. Moreover, the Cochrane Effective Practice and Organisation of Care data collection form was used to collect pertinent study information. Results Our search yielded 3,156 studies across all databases. After exclusion of duplicates, ineligible study designs, and studies that did not pass abstract and full-text screening, 17 studies were included in our final analysis (4 randomized controlled trials; 13 non-randomized studies). Of the 13 studies evaluating hemoglobin (Hgb) response, seven found an increase in Hgb levels when patients were treated with IV iron. The eight studies evaluating red blood cell (RBC) transfusion rates found no significant differences in RBC transfusion needs with IV iron treatment. Studies analyzing health-related outcomes typically found increased quality of life and decreased post-operative complications.
Discussion This review suggests improved outcomes of IDA in GI cancer patients treated with IV iron rather than other iron supplementation methods. Timely diagnosis and appropriate IDA management can greatly improve quality of life in this patient population, especially if myelosuppressive chemotherapy is required. Our systematic review has some limitations owing to heterogeneous interventions in the randomized controlled trials, the varying time points of data collection in each study, and the small sample sizes used.
Background: Management of localized pancreatic cancer is variable. We describe the development of a neoadjuvant therapy pathway (NATP) to standardize care across a large healthcare system. Methods: We conducted an IRB-approved retrospective analysis of NATP patients between June 2019 and March 2022. The primary endpoint was NATP completion; secondary endpoints included overall survival (OS) and quality measures. Results: Fifty-nine patients began the NATP (median age 70; 44.1% with locally advanced disease). Median time on the NATP was 6.1 months. Initial chemotherapy was FOLFIRINOX (64.2%) or gemcitabine/nab-paclitaxel (GnP; 35.6%), followed by radiation in 32 (54.2%) patients. Forty-four (74.6%) completed the NATP and 30 (50.8%) underwent surgical exploration, with 86.7% undergoing successful resection (61.5% R0, 23.1% R1), while 14 remained unresectable. NATP completion was associated with an increased likelihood of resection (p<0.001). At a median follow-up of 13.4 months, median OS was 20.9 months (95% CI 13.3-28.5), and 1- and 2-year OS were 82.5% and 49.7%. NATP completion was associated with improved OS, with median OS not reached and 1- and 2-year OS of 89.7% and 59.4% (p=0.004). Median time to NATP start was 20 days after MDR, and median time to surgery was 35 days. Age, ECOG status, surgical stage, chemotherapy regimen, and NATP completion were significant univariable predictors of OS, with ECOG status remaining significant on multivariable analysis. Conclusion: Our outcomes provide a baseline for future efforts to improve care across a large system. Efforts to complete the NATP and improve patient ECOG status may result in more patients undergoing surgery and improved survival.
Background: The world of work is undergoing profound changes towards agile, flexible, democratic, and digital forms of work, so-called New Work (NW). The COVID-19 pandemic accelerated these changes and confronted the working world with new challenges. Effects on employee health are ambivalent and remain unclear. Moreover, there is a lack of evidence as to whether existing occupational health management (OHM) measures meet the needs of employees working in new forms of work. Methods/Design: This prospective mixed-methods project will include four substudies to identify different NW forms and the resulting health risks, benefits, and protective factors in subgroups, and to derive target group-specific OHM services. Across the four substudies, the following methods will be used: (1) a scoping review, semi-standardized interviews, and an online survey; (2) a systematic review, an online survey, an expert workshop, and qualitative interviews; (3) workplace observations; and (4) expert workshops. Recommendations for action will be derived from the findings of all substudies and summarized in a checklist for OHM in NW settings. Conclusion: Findings will expand the state of knowledge about NW settings and associated health effects. The evidence-based checklist for target group-specific identification of NW settings and associated health risks, benefits, and protective factors can serve as a basis for action regarding OHM in companies. The findings can provide guidance on how future OHM services should be designed to meet employees' needs.
Objective: Large language models (LLMs) in healthcare have the potential to propagate existing biases or introduce new ones. For people with epilepsy, social determinants of health are associated with disparities in access to care, but their impact on seizure outcomes among those with access to specialty care remains unclear. Here we (1) evaluated our validated, epilepsy-specific LLM for intrinsic bias, and (2) used LLM-extracted seizure outcomes to test the hypothesis that different demographic groups have different seizure outcomes. Methods: First, we tested our LLM for intrinsic bias in the form of differential performance across demographic groups by race, ethnicity, sex, income, and health insurance in manually annotated notes. Next, we used LLM-classified seizure freedom at each office visit to test for outcome disparities in the same demographic groups, using univariable and multivariable analyses. Results: We analyzed 84,675 clinic visits from 25,612 patients seen at our epilepsy center between 2005 and 2022. We found no differences across demographic groups in the accuracy or the positive or negative class balance of outcome classifications. Multivariable analysis indicated worse seizure outcomes for female patients (OR 1.33, p = 3×10^-8), those with public insurance (OR 1.53, p = 2×10^-13), and those from lower-income zip codes (OR ≥ 1.22, p ≤ 6.6×10^-3). Black patients had worse outcomes than White patients in univariable but not multivariable analysis (OR 1.03, p = 0.66). Significance: We found no evidence that our LLM was intrinsically biased against any demographic group. Seizure freedom extracted by the LLM revealed disparities in seizure outcomes across several demographic groups. These findings highlight the critical need to reduce disparities in the care of people with epilepsy. Keywords: Electronic Health Record, Natural Language Processing, Clinical Informatics, Health Disparities
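The contrast between the univariable and multivariable results for Black patients is the classic signature of confounding: a covariate associated with both group membership and outcome (insurance type is used here purely as an illustration) can produce a crude association that vanishes after adjustment. A minimal sketch of this pattern with invented counts (none of these numbers come from the study), comparing a crude odds ratio against a Mantel-Haenszel stratum-adjusted one:

```python
# Hypothetical 2x2 tables stratified by a confounder (e.g., insurance type).
# Within each stratum the group has no effect on the outcome (OR = 1), but
# group composition and outcome rates both differ across strata.
# Format per stratum: (events_A, nonevents_A, events_B, nonevents_B)
strata = [
    (40, 40, 10, 10),  # stratum 1: high outcome rate, group A overrepresented
    (4, 16, 16, 64),   # stratum 2: low outcome rate, group B overrepresented
]

# Crude (univariable) odds ratio from the pooled 2x2 table
a = sum(s[0] for s in strata); b = sum(s[1] for s in strata)
c = sum(s[2] for s in strata); d = sum(s[3] for s in strata)
crude_or = (a * d) / (b * c)

# Mantel-Haenszel (stratum-adjusted) odds ratio
num = sum(ai * di / (ai + bi + ci + di) for ai, bi, ci, di in strata)
den = sum(bi * ci / (ai + bi + ci + di) for ai, bi, ci, di in strata)
mh_or = num / den

print(f"crude OR = {crude_or:.2f}, adjusted OR = {mh_or:.2f}")
# The crude OR suggests an association; the adjusted OR shows none.
```

The crude OR here comes out near 2.2 while the adjusted OR is exactly 1, mirroring how an apparent univariable disparity can be explained by an unevenly distributed covariate.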
Little is known about the role of noncoding regions in the etiology of autism spectrum disorder (ASD). We examined three classes of noncoding regions: Human Accelerated Regions (HARs), which show signatures of positive selection in humans; experimentally validated neural Vista Enhancers (VEs); and conserved regions predicted to act as neural enhancers (CNEs). Targeted and whole-genome analysis of >16,600 samples and >4,900 ASD probands revealed that likely recessive, rare, inherited variants in HARs, VEs, and CNEs substantially contribute to ASD risk in probands whose parents share ancestry, which enriches for recessive contributions, but modestly, if at all, in simplex family structures. We identified multiple patient variants in HARs near IL1RAPL1 and in a VE near SIM1 and showed that they alter enhancer activity. Our results implicate both human-evolved and evolutionarily conserved noncoding regions in ASD risk and suggest potential mechanisms by which changes in regulatory regions can modulate social behavior.
Patient-specific urine-derived cells are valuable tools for biomedical research and personalised medicine, since collection is non-invasive and easily repeated, unlike biopsies. The full potential of urine-derived cells remains untapped, however, due to the short shelf life of samples and the necessity for prompt centrifugation. This study aims to address this limitation by evaluating a novel filtration-based Cell Catcher device and comparing its efficiency to centrifugation. We obtained urine from 18 tubulopathy patients and, using paired analysis, demonstrated that the Cell Catcher device significantly improves both the success rate of isolating viable renal cells and the cell yield. These findings were confirmed in a second independent study using 44 samples obtained from healthy controls or patients with Bardet-Biedl syndrome or tubulopathies, in which colonies were established in 90% of the Cell Catcher-processed samples. Cultured cells displayed a variety of morphologies and expressed markers of podocytes and proximal tubule cells. Collectively, we describe an improved, point-of-care methodology for obtaining live patient cells from urine using a filtration technique, with potential personalised medicine applications in nephrology, regenerative medicine, and urological cancers.
Background Severe paresis of the contralesional upper extremity is one of the most common and debilitating post-stroke impairments. The need for cost-effective high-intensity training is driving the development of new technologies, which can complement and extend conventional therapies. Apart from established methods using robotic devices, immersive virtual reality (iVR) systems hold promise to provide cost-efficient high-intensity arm training. Objective We investigated whether iVR-based arm training yields at least equivalent effects on upper extremity function compared to robot-assisted training in stroke patients with severe arm paresis. Methods Fifty-two stroke patients with severe arm paresis received a total of ten daily group therapy sessions over a period of three weeks, each consisting of 20 minutes of conventional therapy and 20 minutes of either robot-assisted (ArmeoSpring) or iVR-based (CUREO) arm training. Changes in upper extremity function were assessed using the Action Research Arm Test (ARAT), and user acceptance was measured with the User Experience Questionnaire (UEQ). Results iVR-based training was not inferior to robot-assisted training. We found that 84% of patients treated with iVR and 50% of patients treated with robot-assisted arm training showed a clinically relevant improvement in upper extremity function. This difference could be attributed neither to differences between the groups regarding age, gender, time since stroke, affected body side, or baseline ARAT scores, nor to differences in the total amount of therapy provided. Conclusion The present results show that iVR-based arm training is a promising addition to conventional therapy. Potential mechanisms by which iVR unfolds its effects are discussed.
Background: Scar size is critical to left ventricular (LV) remodeling and adverse outcomes following myocardial infarction (MI). Late gadolinium enhancement (LGE) in cardiac magnetic resonance imaging is the gold standard for assessing MI size. Texture-based probability mapping (TPM) is a novel machine learning-based analysis of LGE images. This proof-of-concept study investigates the potential clinical implications of temporal changes in TPM during the first year following an acute revascularized MI. Methods: Forty-one patients with first-time acute ST-elevation MI were included in this study. All patients had single-vessel disease and were successfully revascularized by primary percutaneous coronary intervention. LGE images were obtained two days, one week, two months, and one year post-MI. MI size by TPM was compared with manual LGE-based MI size calculation, LV remodeling, and biomarkers. Results: TPM showed a significant increase in infarct size from the second month through the first year (p<0.01). MI size estimated by TPM at all time points demonstrated strong correlations with peak troponin T levels. At one week, TPM assessment correlated positively with maximum C-reactive protein (r=0.54, p<0.01), and at two months, TPM correlated positively with N-terminal pro-brain natriuretic peptide. Conclusion: This proof-of-concept study suggests that TPM may provide information additional to conventional LGE-based MI analysis of scar formation, LV remodeling, and biomarkers following an acute revascularized MI.
Background: In recent years, Cameroon's Expanded Program on Immunization (EPI) has witnessed a sizable decline in its performance. Many stakeholders have cited weaknesses in human resources as one of the major drivers of the observed trend. In a bid to better understand and redress this situation, the EPI and its partners conducted a Training Needs Assessment (TNA) among central, regional, and district staff. The assessment aimed to quantify and characterize core capability gaps and leverage the findings to design an evidence-based capacity-building plan. Methods: A descriptive cross-sectional study was conducted using aggregated data from a survey carried out among EPI staff from May to September 2016 across the central, regional, and district levels of Cameroon's health pyramid; analysis was done using Microsoft Office Excel 2016. Results: Over half of EPI staff had worked for less than three years in their current post, and roughly three-quarters of them had not received pre-service training on vaccination. Additionally, about half of them had not received any form of in-service training on immunization. Supportive supervision was the most frequently cited topic for training, with a surprisingly high need at the central level (80%). Financial incentive was not a primary motivating factor for learning. Approximately half of the respondents at all levels were not aware of onboarding materials. Most respondents identified multiple meetings, high staff turnover, and competing priorities as major barriers to learning. Only about half of the surveyed staff reported participating in performance reviews, with nearly half of these reviews conducted only when an opportunity arose. Conclusion: Many gaps still challenge the achievement of EPI goals in Cameroon. Sustainably addressing these issues will require a comprehensive framing of capability-building activities.
High-quality EPI human resources will boost the country's vaccination performance and contribute to the reduction of infant mortality in the country.
About 47% of US adults have hypertension (HTN). Prevalence is worse in certain geographic distributions, with the top quartile in the Southeast. In the subset of patients who present with STEMI, prevalence is 30-40%. An interesting further subset comprises patients who presented as STEMI alerts during COVID-19. COVID-19 wreaked havoc on the world after its first confirmed case on 9 January 2020. International and then national transport became limited, followed by the announcement of a pandemic. Quarantines and lockdowns were put in place, especially in March 2020, to prevent epidemiologic spread. We were interested in whether STEMI alerts during this period presented with worse background disease compared to 2019 and 2021, the years before and after the COVID lockdown and peak, including whether the prevalence of background HTN increased in this population. We obtained patient demographics and risk factors for 1001 adults who were STEMI-activated from 1 January 2019 to 31 December 2021 at five sites in Southwest Ohio and compared years 2019 and 2021 to 2020. Of the 1001 STEMI-alert patients, 244 (72.6%) had HTN in 2019, 250 (78.9%) in 2020, and 261 (75.0%) in 2021; overall prevalence across the three years was 755 (75.4%). Compared to 2020, prevalence was not significantly different in 2019 (OR 0.72; CI 0.47-1.09; p=0.12) or in 2021 (OR 0.78; CI 0.51-1.19; p=0.25). STEMI-alert patients at our institution appeared to have a higher overall prevalence of HTN than reported nationally by the CDC (75.4% versus 47%). These patients may have self-selected for disease severity by STEMI-alert status. HTN may also be more prevalent in our region, associated with poor disease detection or with adherence issues when antihypertensives are prescribed. HTN prevalence did not differ significantly across the observed years, despite our hypothesis that patients would present with more background cardiac comorbidities such as HTN.
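The reported odds ratio can be approximately reconstructed from the counts in the abstract. The yearly denominators are inferred from the stated percentages (about 336 STEMI alerts in 2019 and 317 in 2020; with 348 in 2021 these sum to the stated 1001), and a simple Wald interval is used, so the confidence bounds differ slightly from the published ones, which were presumably derived by another method:

```python
import math

# HTN vs no-HTN counts among STEMI-alert patients; denominators are
# inferred from the reported percentages, not stated in the abstract.
htn_2019, n_2019 = 244, 336   # 244/336 = 72.6%
htn_2020, n_2020 = 250, 317   # 250/317 = 78.9%

a, b = htn_2019, n_2019 - htn_2019   # 2019: HTN, no HTN
c, d = htn_2020, n_2020 - htn_2020   # 2020: HTN, no HTN

odds_ratio = (a * d) / (b * c)                 # close to the reported 0.72
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of the log odds ratio
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Consistent with the paper's non-significant p-value, the Wald interval straddles 1.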
Aims/hypothesis: Exogenous glucagon-like peptide 1 (GLP-1) infusion lowers endogenous glucose production (EGP) in euglycemic or hyperglycemic settings. We previously showed that prandial EGP during insulin-induced hypoglycemia is smaller in non-diabetic subjects with gastric bypass (GB) and sleeve gastrectomy (SG), in whom prandial GLP-1 concentrations are increased 5- to 10-fold compared to non-operated controls. The goal of this study was to determine the effect of endogenous GLP-1 on the prandial counterregulatory response to hypoglycemia. Methods: Glucose fluxes and islet-cell and gut hormone responses before and after mixed-meal ingestion were compared among 8 subjects with prior GB, 7 with prior SG, and 5 matched non-surgical controls during a hyperinsulinemic (120 mU/min/m2), hypoglycemic (~3.2 mmol/l) clamp with and without the specific GLP-1 receptor (GLP-1R) antagonist exendin-(9-39) (Ex-9). Results: Before meal ingestion, plasma glucagon and glucose fluxes were similar among the three groups. GLP-1R blockade had no effect on insulin secretion or insulin action before or after meal ingestion, whereas prandial glucagon was enhanced in all three groups (P < 0.05). Ex-9 infusion raised the prandial EGP response to hypoglycemia in the surgical groups (P < 0.05) but decreased this parameter in controls (P = 0.08 for interaction). The rates of systemic appearance of ingested glucose and prandial glucose utilization did not differ among the three groups or between studies with and without Ex-9 infusion. Conclusions/interpretation: Under hypoglycemic conditions, the glucagonostatic, but not the insulinotropic, action of GLP-1 is preserved prandially in humans. Endogenous GLP-1 contributes to the impaired post-meal glucose counterregulatory response to hypoglycemia in non-diabetic subjects after bariatric surgery.
At any one time, over 900 million people globally experience a mental disorder, including alcohol/other drug use disorders (Whiteford et al., 2013), and this number is increasing by about 3% each year (ABS, 2018). Adding to these challenges, the COVID-19 pandemic presents clear risks of a substantial decline in global mental health. Preliminary evidence points towards an overall rise in symptoms of anxiety and in coping responses to stress (Holmes et al., 2020), including increased drug and alcohol use in the general population. The greatest mental health impacts of the COVID-19 pandemic will be felt, however, by those who are already most marginalised and by people with pre-existing mental health and substance use disorders, who have a higher susceptibility to stress than the general population (Yao et al., 2020). eCliPSE is an online clinical portal developed by CI Professor Frances Kay-Lambkin in partnership with the research team and the NSW Ministry of Health to facilitate access to evidence-based eHealth treatments for mental health and alcohol/other drug (AOD) use problems. However, since the testing of eCliPSE in 2017, uptake of this tool via clinician referral has been low, and no clear models existed for integrating digital treatments into health services (Batterham et al., 2015). There are very few examples in the available literature of successful implementation of digital interventions in clinical services, and many failures (Mohr et al., 2017). In response, our team has developed an evidence-informed Integrated Translation and Engagement Model (ITEM) to drive the uptake of digital therapeutics into mental health and alcohol/other drug services across NSW. Based on the latest evidence for effective implementation and a consideration of individual, social, environmental, and structural factors, the ITEM synthesises diverse theoretical approaches into a coherent, integrated model.
The pandemic has highlighted (and exacerbated) social inequities in relation to the prevalence of mental illness, as well as treatment options. Technology has the potential to respond to this challenge, but Australia lags behind the rest of the world in implementing sustainable, effective digital tools into health service delivery. Additionally, no tool currently exists for the evaluation of dual diagnosis capability of digital programs.
The gut microbiome (MB) has been widely shown to affect human health. Since the MB can in turn be altered by various exposures, such as diet and medications, it holds immense potential for future treatments and healthy ageing. On the one hand, faecal microbiota transplantation and Mendelian randomization studies have demonstrated causal links between treatments, the MB, and disease. On the other hand, assessing the causality of MB effects on health has remained challenging, since randomised trials in human subjects are often unethical or difficult to pursue, and Mendelian randomization lacks valid instruments. Thus, novel analytical approaches are needed for inferring causal associations. To overcome these barriers, we propose a novel framework of antibiotic instrumental variable regression (AB-IVR) for estimating the causal relationships between the MB and various diseases. Our inspiration originates from the popular Mendelian randomization method, which uses genetic mutations as instruments in instrumental variable regression (IVR). Further, we rely on recent results showing that antibiotic (AB) treatment has a cumulative long-term effect on the MB, consequently pseudo-randomizing individuals with higher AB usage to have a more perturbed MB. We therefore developed the AB-IVR framework to use long-term AB usage as an instrument in the IVR for assessing the causal effect of the MB on health. We performed a range of sensitivity analyses to explore the properties of our method: varying the sample's age group and the maximum number of ABs used; using a buffer time for incident disease outcomes to account for feedback mechanisms; using subgroups of ABs as instruments; and simulating data for disease outcomes. We detected several interesting causal effects of the MB on health outcomes; some - such as effects on migraine, depression, irritable bowel syndrome, and several more - remained significant irrespective of the sensitivity analysis used.
We believe the AB-IVR framework has promising potential to become a widely used method for assessing the effects of the microbiome on health.
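The core logic of instrumental variable regression — an instrument that shifts the exposure but affects the outcome only through it can recover a causal effect despite unmeasured confounding — can be illustrated with simulated data and the simple Wald estimator. This is a conceptual sketch, not the AB-IVR implementation; every parameter below is invented:

```python
import random
import statistics as st

random.seed(42)
n = 50_000
true_effect = 0.5  # simulated causal effect of MB perturbation on disease trait

z, x, y = [], [], []
for _ in range(n):
    u = random.gauss(0, 1)        # unmeasured confounder (e.g., lifestyle)
    zi = random.gauss(0, 1)       # instrument: cumulative antibiotic usage
    xi = 1.0 * zi + u + random.gauss(0, 1)          # microbiome perturbation
    yi = true_effect * xi + u + random.gauss(0, 1)  # health outcome
    z.append(zi); x.append(xi); y.append(yi)

def cov(p, q):
    mp, mq = st.fmean(p), st.fmean(q)
    return sum((a - mp) * (b - mq) for a, b in zip(p, q)) / (len(p) - 1)

ols = cov(x, y) / cov(x, x)  # naive regression: biased upward by the confounder
iv = cov(z, y) / cov(z, x)   # Wald IV estimator: the confounder drops out
print(f"naive OLS slope = {ols:.2f}, IV estimate = {iv:.2f} (truth 0.5)")
```

Because the instrument is independent of the confounder, the ratio of covariances isolates the causal path, while the naive slope absorbs the confounded correlation (theoretically about 0.83 here versus the true 0.5).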
Different neurodevelopmental conditions, such as autism and ADHD, frequently co-occur. Overlapping traits and shared genetic liability are potential explanations. We examine this using data from the population-based Norwegian Mother, Father and Child Cohort Study (MoBa), leveraging item-level data to explore the phenotypic factor structure and genetic architecture underlying neurodevelopmental traits at age 3 years (N = 41,708-58,630). We identified 11 latent factors at the phenotypic level using maternal reports on 76 items assessing children's motor skills, language, social functioning, communication, attention, activity regulation, and flexibility of behaviors and interests. These factors were associated with diagnoses of neurodevelopmental conditions, and most shared genetic liabilities with autism, ADHD, and/or schizophrenia. Item-level GWAS revealed trait-specific genetic correlations with autism (item rg range -0.27 to 0.78), ADHD (item rg range -0.40 to 1), and/or schizophrenia (item rg range -0.24 to 0.34). Based on patterns of item-level genetic covariance and genomic factor analyses, we found little evidence of common genetic liability across all neurodevelopmental traits. Rather, these results support genetic factors spanning more specific areas of neurodevelopment, some of which, such as prosocial behavior, overlap with factors found in the phenotypic analyses. Other areas, such as motor development, appeared to have more heterogeneous etiology, with indicators in this domain showing a less consistent pattern of genetic correlations with each other. Overall, these exploratory findings emphasize the etiological complexity of neurodevelopmental traits at this early age.
In particular, diverse associations with neurodevelopmental conditions and genetic heterogeneity could inform follow-up work to identify shared and differentiating factors in the early manifestations of neurodevelopmental traits, which in turn could have implications for clinical screening tools and programs.
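As a toy illustration of the phenotypic side of this approach, the sketch below simulates item responses driven by two latent traits and recovers the block structure via eigendecomposition of the item correlation matrix. Principal components are used here as a simplified stand-in for the factor analysis in the study; the item counts, loadings, and trait labels are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
n_children, n_items = 2000, 12  # scaled-down illustration (the study used 76 items)

# Simulate item responses driven by two uncorrelated latent traits
# (e.g. language and motor skills): items 0-5 load on trait 1 with
# loading 0.8, items 6-11 on trait 2 with loading 0.6.
loadings = np.zeros((n_items, 2))
loadings[:6, 0] = 0.8
loadings[6:, 1] = 0.6
latent = rng.normal(size=(n_children, 2))
items = latent @ loadings.T + rng.normal(0.0, 0.5, (n_children, n_items))

# Eigendecomposition of the item correlation matrix: two eigenvalues
# should dominate, one per simulated latent trait.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)  # ascending order
top2 = eigvecs[:, -2:]                   # loadings of the two leading components
print(np.round(eigvals[::-1][:4], 2))    # leading eigenvalues
```

With two well-separated item blocks, the two leading eigenvalues stand clearly above the rest, mirroring how latent factors are identified from item-level covariance before genetic analyses are layered on top.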
Motivation: As the availability of larger and more ethnically diverse reference panels grows, there is increasing demand for ancestry-informed imputation of genome-wide association studies (GWAS) and other downstream analyses, e.g., fine-mapping. Performing such analyses at the genotype level is computationally challenging and necessitates access to individual-level genotype and phenotype data. Summary-statistics-based tools, which do not require individual-level data, provide an efficient alternative that streamlines computational requirements and promotes open science by simplifying the re-analysis and downstream analysis of existing GWAS summary data. However, existing tools each cover only parts of the needed analysis, offer only command-line interfaces, and are difficult for applied researchers to extend or link together. Results: To address these challenges, we present GAUSS, a comprehensive and user-friendly R package designed to facilitate the re-analysis and downstream analysis of GWAS summary statistics. GAUSS offers an integrated toolkit for a range of functionalities, including i) estimating ancestry proportions of study cohorts, ii) calculating ancestry-informed linkage disequilibrium, iii) imputing summary statistics of unobserved variants, iv) conducting transcriptome-wide association studies, and v) correcting for Winner's Curse biases. Notably, GAUSS utilizes an expansive, multi-ethnic reference panel consisting of 32,953 genomes from 29 ethnic groups. This panel enhances the range and accuracy of imputable variants, including the ability to impute summary statistics of rarer variants. As a result, GAUSS elevates the quality and applicability of existing GWAS analyses without requiring access to subject-level genotypic and phenotypic information. Availability and implementation: The GAUSS R package, complete with its source code, is publicly accessible via our GitHub repository at https://github.com/statsleelab/gauss.
To further assist users, we provide illustrative use-case scenarios at https://statsleelab.github.io/gauss/.
Background: The 50-item Expanded Prostate Cancer Index Composite (EPIC) and the International Prostate Symptom Score (IPSS) are two widely used options for assessing prostate-related quality of life (QoL), but there is no method to convert between the two. We therefore developed and externally validated models for this purpose. Methods: 347 consecutive patients who had previously received radiotherapy and surgery for prostate cancer at two institutions in Switzerland and Germany were contacted by mail and asked to complete both questionnaires. The Swiss cohort was used to train and internally validate different machine learning models using 4-fold cross-validation; the German cohort was used for external validation. Results: Converting between the EPIC Urinary Irritative/Obstructive subscale and the IPSS using linear regressions resulted in mean absolute errors (MAEs) of 3.88 and 6.12, below the respective previously published minimal important differences (MIDs) of 5.2 and 10 points. Converting between the EPIC Urinary Summary and the IPSS was less accurate, with MAEs of 5.13 and 10.45, similar in magnitude to the MIDs. More complex model architectures did not improve performance. Conclusions: Linear regressions can be used to convert between the IPSS and the EPIC Urinary subscales. While the equations obtained in this study can be used to compare results across clinical trials, they should not be used to inform clinical decision-making in individual patients.
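A conversion model of the kind reported here amounts to a simple linear fit evaluated by mean absolute error. The sketch below simulates paired scores for illustration; the direction and noise level of the EPIC-IPSS relationship are assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 347  # cohort size matching the study; the scores themselves are simulated

# Hypothetical paired scores: IPSS (0-35, higher = worse symptoms) and an
# EPIC urinary subscale (0-100, higher = better QoL), assumed negatively related.
ipss = rng.integers(0, 36, n).astype(float)
epic = np.clip(100.0 - 2.4 * ipss + rng.normal(0.0, 6.0, n), 0.0, 100.0)

# Fit a linear conversion EPIC -> IPSS and report the mean absolute error (MAE).
A = np.column_stack([np.ones(n), epic])
coef, *_ = np.linalg.lstsq(A, ipss, rcond=None)
pred = A @ coef
mae = np.abs(pred - ipss).mean()
print(f"IPSS ~ {coef[0]:.2f} + {coef[1]:.2f} * EPIC, MAE = {mae:.2f}")
```

Comparing the MAE of such a fit against a published minimal important difference, as the study does, indicates whether conversion errors are small enough to matter clinically.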