- …Intern) will support the Department of Obstetrics & Gynecology in developing technology-enabled solutions that improve research, education, and operational workflows. This student role provides hands…
- …human patients. The applicant should have a strong background in neuroscience, cell biology, biomedical engineering, or computer science, and should be able to perform small-animal surgeries…
- …directions, as well as close mentorship tailored to career goals. Additionally, this position offers access to state-of-the-art facilities and core resources (e.g., imaging, flow cytometry, proteomics), and…
- …administration feedback. Effectively contributes to and supports an environment that enhances the positive self-image of individuals served and preserves their human dignity, as observed by supervisor. Maintains…
- …that enhances the positive self-image of individuals served and their families and preserves their human dignity, right to fair and equitable treatment, self-determination, individuality, privacy, and civil rights…
- …Engineering, Hematology, Biophysics, or a related field. Prior experience in blood biomechanics, thrombosis, or coagulation research is highly desirable. Proficiency in imaging, clot assays, and…
- …Engineering, Materials Science & Engineering, and Industrial & Systems Engineering, with faculty expertise in cell & tissue engineering, neuro-prosthetics, neural imaging, signal processing, neural networks, and AI…
- …the nursing process to meet a variety of health care needs, with ambulatory care as a primary focus. Works with a variety of health care professionals and security officers in a correctional environment…
- …established policies and procedures, as observed by supervisor. Effectively contributes to and supports an environment that enhances the positive self-image of individuals served and their families and preserves…
- …Science, Electrical/Computer Engineering, or a related field by the start date, with a strong publication record in computer vision, multimodal learning, or vision–language models. We require hands-on expertise with…