About the job
Job Responsibilities
Stay abreast of cutting-edge AI research trends, including LLM, Agentic AI, Diffusion, and Inference acceleration, and explore integration opportunities with our proprietary NPU.
Lead groundbreaking AI research aimed at publication in top-tier academic conferences.
Forge research collaborations with prestigious global institutions and manage associated projects.
Full-time|On-site|Yongin-si, Gyeonggi-do, South Korea
Note: This position is available exclusively for individuals wishing to apply as 'Professional Research Personnel' under the military service exception; those without military obligations are not eligible to apply.

Upstage AI is a dynamic company dedicated to leveraging AI technology to address complex business challenges. Driven by our vision of 'Making AI Beneficial' and our mission to develop 'Artificial General Intelligence (AGI) for Work', we focus on creating AI solutions that go beyond mere automation, aiming to revolutionize productivity through enhanced decision support and cost reduction.

To realize this vision, Upstage AI continuously advances its core technology in Large Language Models (LLMs). We rigorously assess and enhance model performance through benchmark metrics tracked by Global Frontier, while simultaneously building a Workspace Benchmark Set that reflects the actual needs of our clients to maximize model practicality and performance. Our commitment is to solve intricate problems in the industry while leading global technological standards.

The LLM Post-training team focuses on three key objectives: (1) enhancing knowledge and reasoning capabilities, (2) aligning with human preferences, and (3) improving agentic tool-use performance. By employing scalable data construction methodologies, high-quality data filtering systems, and state-of-the-art learning techniques such as DPO, RLHF, and RLVR, we spearhead the development of world-class post-training technologies.
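One of the post-training techniques named above, DPO (Direct Preference Optimization), trains a model directly on preference pairs. As an illustrative sketch only (not Upstage's implementation), the per-example DPO loss can be computed from summed sequence log-probabilities of a chosen and a rejected response under the policy and a frozen reference model:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log(sigmoid(beta * margin)), where the
    margin is the policy's log-prob advantage of the chosen over the
    rejected response, relative to the reference model."""
    margin = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(x)) = log(1 + exp(-x)), computed stably
    return math.log1p(math.exp(-margin)) if margin > -30 else -margin

# When the policy does not diverge from the reference, the loss is log(2)
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # 0.6931
```

Increasing the chosen response's log-probability under the policy (relative to the reference) lowers the loss, which is the intended training signal.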
Joining this team means you will be at the forefront of evolving LLM technology and contribute to next-generation technological innovations that address industry challenges.

Representative Projects:
Reinforcement learning applications for LLMs (mathematics, coding, general reasoning, tool utilization)
Efficient and effective reasoning strategies
A scalable pipeline for agentic tool-use data synthesis
Language-specific reward models
Precision in instruction following
As project directions evolve with technological advancements, we prioritize the most impactful elements of LLM model development at any given time.

Employment Type:
Full-time (for new entrants and transfers of Professional Research Personnel)

Work Locations:
Gwanggyo Office (10-minute walk from Sanghyeon Station; in use until March 2026)
Gangnam Office (7-minute walk from Gangnam Station; before March 2026)

Recruitment Process (Entirely Online):
Document screening
Algorithm coding test
Deep learning coding test
Technical interview (1st round)
Technical interview (2nd round)
Culture interview
Final interview
Announcement of final results
Full-time|On-site|Yongin-si, Gyeonggi-do, South Korea
Note: This position is exclusively for those eligible for the military service exemption as 'Professional Research Personnel'. Candidates without military obligations are not eligible to apply.

At Upstage AI, we are driven by our vision of 'Making AI Beneficial' and our mission of 'Building Intelligence for the Future of Work'. We are developing next-generation AI solutions based on Vision-Language Models (VLM) that go beyond simply reading text to comprehensively understanding visual information alongside textual data, including images, charts, and tables. Our solutions empower clients to extract dormant insights from their vast document datasets, creating new opportunities for added value.

Our VLM team is engaged in research and development of web-scale data collection and synthesis, large-scale pre-training and post-training, and various evaluation methodologies. We aim to provide user-friendly AI solutions that enable everyone to utilize AI technology effortlessly. With our advanced OCR capabilities and key-value extraction technologies, we have recently launched a Document Parsing model that analyzes various document layouts. Through these innovations, we strive to maximize work efficiency and productivity for businesses, ensuring that AI delivers tangible value in practical applications.

Our commitment to making AI beneficial is highlighted by our Private LLM services, which optimize LLM technology for business environments to enhance operational efficiency. We are dedicated to launching a series of APIs that make world-class AI models easily accessible across various sectors, thereby supporting our corporate clients' success. Among these, Upstage Document AI stands out for its exceptional OCR and information extraction capabilities, aiming to automate and streamline cumbersome document processing through AI.

We are looking for passionate new members to join us on this exciting and challenging journey.
If you have a zeal for leading technology in the multimodal AI field and aspire to connect research with real-world services through end-to-end AI experience, you will find a perfect fit on the Upstage VLM team.

Key Responsibilities
Design and build data collection pipelines
Collect and filter multimodal data (document images, field photos, charts, etc.)
Research and apply preprocessing and improvement techniques to enhance data quality
Model training
Research and implement pre-training and post-training techniques for large-scale vision encoders and vision-language models
Develop and apply data and training strategies for various vision-language tasks
Research model architecture improvements and optimization techniques considering training and inference efficiency
Evaluation
Investigate and apply various evaluation techniques to assess document-centric VLM performance
Develop and introduce new evaluation methods that align with real-world usage scenarios
Design and implement internal benchmarking tools for continuous improvement and scalability
Additional tasks
Share research outcomes as top-tier international conference papers or open-source code
Lead preliminary research reproducing recent papers and share techniques within the team
About Us
At TwelveLabs, we are at the forefront of creating innovative multimodal foundation models that can interpret videos with human-like understanding. Our models elevate the standards in video-language modeling, enabling intuitive interactions and comprehensive analyses of diverse media formats.

With over $110 million secured in Seed and Series A funding, we are backed by leading venture capital firms such as NVIDIA's NVentures, NEA, Radical Ventures, and Index Ventures, along with esteemed AI pioneers like Fei-Fei Li, Silvio Savarese, and Alexandr Wang. Our headquarters are in San Francisco, with a significant presence in Seoul, reflecting our dedication to global innovation. Our collaboration with NVIDIA and AWS grants us access to cutting-edge chips, including B300s, empowering us to expand the possibilities of video AI.

We cherish the diverse journeys of our team members, believing that our varied cultural, educational, and life experiences are key to challenging the status quo. We seek passionate individuals eager to impact the technology landscape and help us redefine video understanding and multimodal AI.

Team Overview
The Embedding & Search team is integral to TwelveLabs' video understanding initiatives. We craft unified embedding spaces that encompass video, audio, text, and other modalities, and develop retrieval systems designed to deliver precise results that align with user intent across extensive video catalogs.

Our research spans multimodal representation learning via contrastive and probabilistic methods; temporal video understanding, with a focus on hierarchical segmentation and boundary detection; neural ranking architectures for multi-stage retrieval; and user behavior modeling to understand how users search for and engage with video content.
We prioritize both algorithmic advancements that set new benchmarks and human-centric insights that enhance the utility of our systems. Our research team benefits from access to state-of-the-art chips like NVIDIA B300s, which accelerate our research-to-production transitions.
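As a hedged illustration of the contrastive representation learning mentioned above (a generic sketch, not TwelveLabs' actual model code), a symmetric InfoNCE-style loss over a batch of paired video and text embeddings can be written with NumPy:

```python
import numpy as np

def info_nce(video_emb, text_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE-style) loss: the i-th video should
    match the i-th text and mismatch every other text, and vice versa."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature               # pairwise cosine similarities
    diag = np.arange(len(logits))
    # log-softmax over texts for each video, and over videos for each text
    v2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    t2v = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    return -(v2t[diag, diag].mean() + t2v[diag, diag].mean()) / 2

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
aligned = info_nce(emb, emb)                     # each pair identical
shuffled = info_nce(emb, np.roll(emb, 1, axis=0))  # pairs mismatched
```

Correctly aligned pairs should yield a much lower loss than mismatched ones, which is the property a retrieval-oriented embedding space is trained for.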
Join Upstage AI, a pioneering company dedicated to solving complex business challenges through cutting-edge AI technology. Our vision, 'Making AI Beneficial,' and our mission to develop 'Artificial General Intelligence (AGI) for Work' drive our commitment to enhancing productivity through innovative AI solutions that support intricate decision-making and cost reduction.

To realize our vision, we continuously advance Large Language Models (LLMs), the cornerstone of AGI technology. By leveraging benchmark metrics tracked by Global Frontier, we assess and enhance model performance while building a Workspace Benchmark Set that reflects client needs, maximizing both practicality and effectiveness. Through these efforts, Upstage AI aims to address intricate industry challenges while leading global technological standards.

The LLM Post-training team focuses on enhancing (1) knowledge and reasoning capabilities, (2) human preference alignment, and (3) agentic tool utilization. Using scalable data construction methodologies, high-quality data filtering systems, and the latest learning techniques such as DPO, RLHF, and RLVR, this team is committed to developing world-class post-training technologies.
By joining us, you'll be at the forefront of evolving LLM technologies, contributing to next-generation innovations that solve real-world industry problems.

Key Projects:
Reinforcement learning for LLMs (mathematics, coding, general reasoning, tool use)
Efficient and effective reasoning
A scalable agentic tool-use data synthesis pipeline
Language-specific reward models
Precise instruction following
Projects are dynamic and evolve based on technological advancements and the most impactful elements of LLM model development.

Internship Details:
Duration: 3 to 6 months

Recruitment Process:
Application screening
Algorithm coding test
Deep learning coding test
Technical interview (Round 1)
Technical interview (Round 2)
Culture interview
Final results announcement
* The process may be adjusted based on circumstances.

Work Environment:
Work remotely anywhere on Earth!
We cover beverage costs when you use cafes, study rooms, or co-working spaces for work.
We provide financial assistance for work-related software, books, educational materials, and other resources necessary for growth.
Part-time|$35/hr|Remote — South Korea
Please submit your CV in English, including your English proficiency level.

Mindrift bridges experts with project-based AI roles for prominent tech firms, concentrating on the evaluation and enhancement of AI systems. Engagements are project-based; this is not permanent employment.

Role Overview
Each project presents distinct challenges, and contributors may be tasked with:
Crafting innovative computational physics problems that replicate authentic research workflows;
Designing Python-based solutions using libraries such as NumPy, SciPy, and SymPy;
Ensuring tasks are computationally demanding and cannot be solved manually in a reasonable timeframe (days/weeks);
Creating problems requiring complex reasoning across mechanics, electromagnetism, thermodynamics, and quantum mechanics;
Formulating problems grounded in genuine research challenges or practical physics applications;
Validating solutions in Python using standard physics simulation libraries;
Documenting problem statements comprehensively and providing verified correct answers.

Desired Qualifications
This position suits physicists experienced in Python who are open to part-time, non-permanent projects. Ideal candidates will have:
A degree in Physics (Theoretical, Experimental, or Computational) or a related discipline;
Proficiency in Python for numerical validation; familiarity with MATLAB, R, C, SQL, NumPy, Pandas, SciPy, or equivalent tools is also acceptable;
At least 2 years of relevant professional experience in applied research or teaching;
Experience with numerical simulation techniques;
The ability to design problems reflecting genuine physics research workflows;
Creativity in problem formulation across various physics disciplines;
An understanding of physics modeling and approximation methods;
A strong command of written English (C1+).

How the Process Works
Apply → Meet qualifications → Join a project → Complete tasks → Receive payment

Time Commitment
For this project, tasks are estimated to require approximately 10-20 hours per week during active phases, contingent on project needs. This is an estimate, not a guaranteed workload, and applies only while the project is active.

Compensation
Contributors can earn up to $35 per hour, depending on their expertise and contribution pace. Compensation varies across projects based on scope, complexity, and expertise required; other projects on the platform may offer different earning levels based on their specific requirements.
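As a minimal sketch of the validation step described above, using a hypothetical example problem rather than an actual Mindrift task: the period of a simple harmonic oscillator is T = 2π√(m/k), so integrating x'' = -(k/m)x numerically over exactly one analytic period should return the state to its initial condition, cross-checking the analytic answer:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def state_after_one_period(m=1.0, k=4.0, steps=2000):
    """Integrate x'' = -(k/m) x from x = 1, v = 0 over the analytic
    period T = 2*pi*sqrt(m/k); the trajectory should close on itself."""
    period = 2 * np.pi * np.sqrt(m / k)
    h = period / steps
    f = lambda t, y: np.array([y[1], -(k / m) * y[0]])
    y = np.array([1.0, 0.0])
    for i in range(steps):
        y = rk4_step(f, i * h, y, h)
    return y

x, v = state_after_one_period()
# Numerical validation: the state returns to (1, 0) to within RK4 error
assert abs(x - 1.0) < 1e-6 and abs(v) < 1e-6
```

Real tasks would involve problems without closed-form answers, but the pattern is the same: an independent numerical computation confirms the documented reference solution.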
Under its vision of 'Making AI Beneficial' and its mission of 'Building intelligence for the future of work', Upstage is building next-generation AI solutions based on Vision-Language Models (VLM) that go beyond reading characters and sentences to perceive visual information such as photos, charts, and tables, and to understand it together with text. This gives customers the opportunity to extract information lying dormant in their vast document data and realize new insights and added value. To this end, Upstage's VLM team conducts research and development on web-scale data collection and synthesis, large-scale pre-training and post-training, and various evaluation methods.

Upstage aims to provide easy-to-use AI solutions so that anyone can readily apply AI technology. We already hold top-tier OCR technology and Key-Value extraction technology that automatically pulls meaningful information from documents, and we recently released a Document Parsing model that analyzes diverse document layouts. Building on these technologies, Upstage provides tailored AI solutions that maximize companies' work efficiency and productivity, working to ensure AI creates real value in actual business.

We also offer a Private LLM service that optimizes LLM technology for business environments to raise companies' work efficiency and productivity, and, so that AI can benefit the world, we have launched an API series that makes world-class AI models easy to use across many fields, contributing to our corporate clients' business success. Among these, Upstage Document AI is a product built on world-class OCR and information extraction technology, with the goal of automating and streamlining cumbersome document processing through AI.

We are looking for new members to join us on this exciting and challenging journey. If you are passionate about leading technology in the multimodal AI field, want end-to-end AI experience that goes beyond research to reach real services, and hope to scale technology through collaboration and grow quickly through productization, you will be a perfect fit for the Upstage VLM team.
At Upstage, we are dedicated to leveraging AI technology to solve complex business challenges. Our vision of "Making AI Beneficial" and our mission of developing "Artificial General Intelligence (AGI) for Work" guide our operations. We focus on developing AI solutions that go beyond simple task automation to dramatically enhance productivity through sophisticated decision-making support and cost reduction.

To realize this vision, Upstage continuously advances the core technologies of AGI, particularly Large Language Models (LLMs). We enhance our technical competitiveness by diagnosing and improving model performance against benchmark metrics tracked by Global Frontier while developing a Workspace Benchmark Set that reflects our clients' real needs, maximizing the practicality and effectiveness of our models. Our aim is to solve complex industry problems while leading global technology standards.

The LLM Post-training team focuses on three primary goals: (1) enhancing knowledge and reasoning capabilities, (2) aligning with human preferences, and (3) improving performance in agentic tool use. We utilize scalable data construction methodologies, a high-quality data filtering system, and the latest learning techniques such as DPO, RLHF, and RLVR to spearhead the development of world-class post-training technologies.
By joining this team, you will be at the forefront of evolving LLM technology and contribute to next-generation innovations that address real-world industry problems.

Representative Projects
Reinforcement learning applications for LLMs (mathematics, coding, general reasoning, tool utilization)
Efficient and effective reasoning methods
Development of a scalable data synthesis pipeline for agentic tool use
Creation of language-specific reward models
Precision in instruction following
** Project focus may change according to technology trends and circumstances, concentrating on the most impactful technological elements for LLM model advancement at any given time.

Working Conditions
Full-time position or Internship (Experiential, 3-6 months)

Recruitment Process - Fully Online
Document screening
Algorithm coding test
Deep learning coding test
Technical interview (1st round)
Technical interview (2nd round)
Cultural interview
Final interview
Final results announcement
* The process may be adjusted depending on circumstances.
* A reference check may occur after the final interview.

Work Environment
Work together from anywhere on Earth! We offer the flexibility to work remotely.
You can freely choose equipment for remote work within a budget of 5 million KRW.
At Upstage, we are dedicated to solving business challenges through advanced AI technology, driven by our vision of "Making AI Beneficial" and our mission to achieve "Artificial General Intelligence (AGI) for Work". We focus on developing AI solutions that go beyond mere automation, enhancing productivity through complex decision support and cost reduction.

To realize this vision, Upstage continuously advances the core technologies of AGI, particularly Large Language Models (LLMs). We enhance our technological competitiveness by diagnosing and improving model performance through benchmark metrics tracked by Global Frontier, while also creating Workspace Benchmark Sets that reflect our clients' actual needs, maximizing the practicality and performance of our models. In this way, we strive to solve complex problems across industries and lead global technology standards.

The LLM Evaluation team researches and develops performance evaluation benchmarks and toolkits for LLMs, continuously monitoring benchmark trends aligned with Solar's technology strategy. Benchmark development encompasses every process required for performance assessment, including the design of evaluation datasets and metrics. Its key objectives are: 1) to extend or create benchmarks that overcome existing limitations, 2) to develop benchmarks reflecting Korean culture and language characteristics, and 3) to build Work Intelligence benchmarks based on real-world scenarios. Toolkit development focuses on a cost- and resource-efficient evaluation framework and an environment for analyzing inference results.
Joining this team offers the opportunity to evaluate and diagnose frontier models and Solar from multiple perspectives while collaboratively designing a data-driven technology roadmap.

Representative Projects
Benchmark Development
Agent Benchmark
Reasoning Benchmark
Human Alignment Benchmark
Solar Edge-case Benchmark
Evaluation Toolkit Development
Diverse Benchmark Evaluation Framework
Dashboard & Leaderboard of Evaluation Results
** Projects may evolve with technological trends, focusing on the most impactful elements for the advancement of LLM models at each stage.

Work Structure
Internship (path to full-time; conversion reviewed after 3-6 months)

Recruitment Process - Conducted entirely online
Document screening
Algorithm coding test
Assignment test
Technical interviews (1st/2nd)
Culture interview
Final result announcement
* The process may be adjusted based on circumstances.

Work Environment
Anywhere On Earth But Together! We embrace the philosophy of working together from anywhere.
Part-time|$35/hr|Remote — South Korea
To apply, please submit your CV in English and specify your English proficiency level.

Mindrift is a platform that connects talented specialists with project-based AI opportunities offered by leading technology companies, with a primary focus on the testing, evaluation, and enhancement of AI systems. Please note that participation is project-based and does not involve permanent employment.

Opportunity Overview:
Each project presents unique challenges, and contributors may engage in the following tasks:
Design innovative computational physics problems that emulate authentic physics research workflows;
Create programming challenges requiring Python solutions (using libraries such as NumPy, SciPy, and SymPy);
Ensure that problems are computationally demanding, requiring days or weeks to solve manually;
Develop challenges that require complex reasoning in fields such as mechanics, electromagnetism, thermodynamics, and quantum mechanics;
Base problems on real-world research issues or practical applications in physics;
Validate solutions using Python with established physics simulation libraries;
Clearly document problem statements and provide verified, accurate solutions.

Candidate Profile:
This position is ideal for quantum researchers with Python experience who are interested in flexible part-time, non-permanent projects. Preferred qualifications include:
A degree in Physics (Theoretical, Experimental, or Computational) or a related field;
Proficiency in Python for numerical validation; familiarity with MATLAB, R, C, SQL, NumPy, Pandas, SciPy, or other domain-specific libraries is also acceptable;
A minimum of 2 years of professional experience in applied research or teaching;
Experience with numerical simulation methodologies;
The ability to craft problems that reflect real physics research workflows;
Creative problem-solving skills across various physics domains;
Familiarity with physics modeling and approximation techniques;
Strong written communication skills in English (C1 or higher).

Application Process:
Apply → Meet qualification requirements → Join a project → Complete tasks → Get compensated

Project Commitment:
For this project, tasks are projected to require approximately 10-20 hours per week during active phases, depending on project needs. This estimate is not a guaranteed workload and applies only while the project is ongoing.

Compensation:
Contributors can earn up to $35 per hour, contingent upon experience and contribution pace. Compensation may vary across projects depending on their complexity and required expertise.
Join NEOWIZ in revolutionizing the gaming landscape by crafting astonishing games that captivate everyone. Our mission extends beyond publishing competitive IP; we are dedicated to developing games across various genres and platforms to deliver unmatched experiences and surprises to players worldwide.

At NEOWIZ, we foster a collaborative environment where diverse ideas are exchanged to discover better solutions. We believe that outstanding performance and teamwork stem from robust information sharing, and we embrace challenges with passion and determination. We invite REAL PERFORMERS to join us in creating REAL GAMES that will change the gaming industry.

Department Overview
The NEOWIZ New Technology Lab focuses on applying proven cutting-edge AI technologies to real game development and live services, prioritizing practicality over academic research. We analyze the latest global papers and open-source technologies weekly, transforming them into robust AI services capable of handling NEOWIZ's significant game traffic and data.

We are seeking a Research Specialist eager to experience the thrill of seeing their code directly impact the experiences of millions of gamers and colleagues.
Join Us at Twelve Labs
At Twelve Labs, we are at the forefront of developing revolutionary multimodal foundation models that interpret videos with human-like understanding. Our models have set new benchmarks in video-language modeling, enhancing our capabilities in analyzing and interacting with diverse media forms.

Backed by over $110 million in Seed and Series A funding, we are supported by prestigious venture capital firms including NVIDIA's NVentures, NEA, Radical Ventures, and Index Ventures, along with esteemed AI pioneers such as Fei-Fei Li and Silvio Savarese. While our headquarters is in San Francisco, our significant presence in Seoul highlights our dedication to global innovation. Our strategic partnerships with NVIDIA and AWS provide access to top-tier hardware, including the B300s, which empowers our advancement of video AI capabilities.

We embrace the unique journeys of every individual and believe that our diverse backgrounds drive innovation. We seek passionate individuals who resonate with our mission and are eager to make impactful contributions as we redefine technology and transform the world of video understanding and multimodal AI.

About the Video Cognition System Team
Our team is dedicated to creating the first video cognition system capable of processing extensive video libraries into a structured, queryable Video Memory & Cortex for vertical LLM agents. We are addressing fundamental questions of machine cognition, focusing on perception, memory, reasoning, and attention; our goal is to design innovative memory structures that exceed traditional context windows and build a reasoning cortex for comprehensive video analysis.

Our research covers corpus-level reasoning, knowledge extraction, indexing architectures, and multi-video understanding, requiring close collaboration between research and engineering to create systems that are both scientifically rigorous and impactful.
About Us
At Twelve Labs, we are looking for talented individuals eager to shape the global standard for video understanding AI. We develop cutting-edge AI models tailored for video content, enabling advanced functionality such as search, analysis, summarization, and insight generation from vast video datasets.

Our technology is used by the largest sports leagues to swiftly and accurately identify highlights from extensive game footage, enhancing the personalized viewing experience. In South Korea, integrated control centers leverage our models to analyze CCTV footage for rapid crisis response, while major global broadcasters and studios use our technology to create content for billions of viewers.

As a deep-tech startup with offices in San Francisco and Seoul, Twelve Labs has been recognized as one of the top 100 AI startups globally by CB Insights for four consecutive years. We have raised over $110 million from renowned venture capital firms and companies such as NVIDIA, NEA, Index Ventures, Databricks, and Snowflake. Our AI model is uniquely offered through Amazon Bedrock in South Korea, and we are committed to creating innovative products alongside exceptional colleagues while growing with our global clientele.

At Twelve Labs, we operate on core values that include:
Honesty and introspection toward oneself and the team.
Resilience and humility in the face of failure and feedback.
A commitment to continuous learning and elevating team capabilities together.
If you enjoy solving challenging problems and growing alongside your team, this is the opportunity for you at Twelve Labs.

Team Overview
You will join the team responsible for the research and development of Marengo, our multimodal embedding model.
This model integrates various modalities, including video, audio, and text, into a unified embedding space. The team tackles diverse research topics, including contrastive learning, temporal video understanding, and multimodal representation learning. We oversee the entire model development process, from building large-scale training data pipelines to designing model architectures, optimizing distributed training, and creating evaluation frameworks. With access to top-tier GPU resources such as the NVIDIA B300, we can rapidly conduct large-scale experiments.

In an environment with a short gap between research and production, we collaborate closely with the Search, Product, and Infrastructure teams to continuously enhance the quality of models used by thousands of clients worldwide.

Role Overview
As the ML Research Engineering Manager on the Marengo team, you will lead and develop the research engineering group responsible for Twelve Labs' multimodal embedding models. You will own the team's technical roadmap and support the growth of its engineers.

This is a player-coach role: you will manage a team of ML research engineers focused on model architecture, training infrastructure, and data pipelines while remaining technically engaged enough to make sound architectural decisions and assess research directions. We seek someone who has experience building and deploying production ML systems and is eager to amplify their impact by empowering their team.
Key Responsibilities
Manage and operate the testing infrastructure for our software and hardware solutions.
Conduct operations and maintenance on bare-metal equipment used for evaluations.
Support the evaluation infrastructure to ensure successful product validation.
Key Responsibilities
Devise and execute comprehensive verification strategies for block/IP/SoC, establishing test benches that enable effective verification at each level.
Create and implement functional tests based on the established verification test plans.
Lead the design verification process to successful completion, meeting defined metrics for functional and code coverage.
Analyze, troubleshoot, and fix functional discrepancies in the design, working closely with the design team.
Collaborate with cross-disciplinary teams, including Design, Modeling, Emulation, and Silicon Validation, to ensure superior design quality.
Key Responsibilities
Integrate and validate IP blocks within the System-on-Chip (SoC) architecture while assisting in the physical implementation process.
Formulate SoC-level specifications, architecture, and operational scenarios.
Understand standard interface specifications (e.g., PCIe) and chip operational scenarios to configure integrated IP blocks effectively.
Conduct performance analyses through chip-level simulations focused on bus and memory bandwidth, alongside FPGA prototyping.
Key Responsibilities
Design, develop, and maintain Linux PCIe device drivers and kernel modules to optimize system performance.
Enhance PCIe subsystem functionality, focusing on DMA, IOMMU, interrupts, and BAR mapping.
Create user-space libraries and APIs that enable high-speed data transfers.
Collaborate with hardware and firmware teams to build end-to-end PCIe I/O pipelines.
Devise effective memory management techniques and implement zero-copy data transfer mechanisms.
About the Role
We are seeking a dedicated Security Engineer to report directly to the CTO. This role is instrumental in establishing the company's security framework and implementing a Zero Trust architecture across the organization. You will be responsible for developing security policies, configuring systems, managing accounts, controlling access, ensuring cloud security, and conducting penetration testing.

Key Responsibilities
Establish and execute an enterprise security strategy and policies based on Zero Trust principles.
Build and operate identity and access management systems (SSO, IAM, RBAC, etc.).
Design and configure security architecture for SaaS and cloud infrastructure (AWS, GCP, etc.).
Diagnose and remediate security vulnerabilities in internal systems and services.
Collaborate with development and infrastructure teams to integrate security practices (code reviews, CI/CD security, etc.).
Set up and manage incident response systems and monitoring environments.
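As a toy illustration of the role-based access control (RBAC) mentioned above, with hypothetical role and user names rather than any real system's configuration, a deny-by-default permission check in the spirit of Zero Trust might look like:

```python
# Hypothetical example data: roles map to permission sets, users map to roles.
ROLE_PERMISSIONS = {
    "admin":    {"read", "write", "delete"},
    "engineer": {"read", "write"},
    "viewer":   {"read"},
}
USER_ROLES = {"alice": "admin", "bob": "viewer"}

def is_allowed(user: str, action: str) -> bool:
    """Deny by default: unknown users or roles receive no permissions."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("alice", "delete"))   # True: admin may delete
print(is_allowed("bob", "write"))      # False: viewer may only read
print(is_allowed("mallory", "read"))   # False: unknown user is denied
```

Production systems (SSO/IAM platforms) layer authentication, auditing, and policy engines on top, but the deny-by-default check is the core idea.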
Full-time|On-site|Yongin-si, Gyeonggi-do, South Korea
At Upstage AI, we specialize in leveraging AI technology to solve complex business challenges. Our vision, "Making AI Beneficial," and our mission, "Building Intelligence for the Future of Work," drive us to deliver AI products and solutions that enhance productivity through advanced decision-making support and cost reduction. We empower businesses to innovate their processes for more effective growth.

Our goal is to democratize AI technology by offering user-friendly AI solutions. We possess cutting-edge OCR technology and Key-Value extraction capabilities for meaningful information retrieval from documents. Recently, we have unveiled a Document Parsing model that analyzes various document layouts. With these advancements, Upstage AI provides tailored AI solutions that maximize operational efficiency and productivity, ensuring AI delivers significant value in real-world business contexts.

We also offer Private LLM services optimized for business environments, enabling companies to enhance operational efficiency. Our API series enables easy access to world-class AI models across various sectors, contributing to our clients' business success. Among our offerings, Upstage Document AI stands out with its superior OCR and information extraction capabilities, aimed at automating and streamlining tedious document processes.

The AI DevOps team is a core engineering unit focused on designing, building, developing, and operating Upstage's AI products and services to create value for our customers.
We are responsible for automating the deployment of AI-based solutions and establishing a stable operational framework, designing delivery pipelines tailored to client environments and developing automation tools and operational standards for efficient deployment and maintenance. Additionally, we foster trust-based technology partnerships by understanding use cases across various industries and resolving client-specific challenges on-site.

The AI DevOps team thrives on a culture of continuous improvement and innovation. We apply the latest technologies to solve practical use case problems across different industries, valuing the experience of rapid growth. For instance, to ensure the stable operation of numerous client systems, we develop internal solutions such as deployment automation tools, inspection management tools, and monitoring tools, collaborating with product teams as needed to contribute to product-level enhancements.

DevOps is not just an operational unit; it is an engineering organization that technically realizes client business growth, serving as a bridge that connects development, operations, and customer experience. Here, you will find the opportunity to identify essential development and design elements in the ever-evolving AI industry and directly contribute to our clients' business success.
Part-time|$37/hr|Remote — South Korea
We invite you to submit your CV in English, highlighting your English proficiency level.

At Mindrift, we connect talented professionals with exciting project-based AI opportunities for top technology firms, focusing on the assessment, evaluation, and enhancement of AI systems. Please note that participation is project-based and not considered permanent employment.
Opportunity Overview
Each project presents distinct tasks; however, contributors might engage in the following activities:
Designing innovative computational engineering problems that reflect authentic engineering workflows;
Creating problems that necessitate Python programming for engineering calculations and simulations;
Ensuring problems are computationally demanding and involve numerical methods or iterative solutions;
Developing problems that include system design, optimization, and analysis;
Basing problems on real-world research challenges or practical engineering applications;
Validating solutions using Python with established engineering libraries;
Clearly documenting problem statements while providing verified correct answers.
Ideal Candidate Profile
This role is suited for engineers with Python expertise who are open to part-time, non-permanent project work.
The ideal candidate will possess:
A degree in Electrical Engineering or a related discipline;
Proficiency in Python for numerical validation; familiarity with MATLAB, R, C, SQL, Numpy, Pandas, SciPy, or any domain-specific programming language is also acceptable;
A minimum of 2 years of professional experience in applied, research, or teaching environments;
An understanding of practical engineering constraints and approximations;
Strong written English communication skills (C1+ level);
Professional certifications (e.g., CMME, SAS Certifications, CAP) and experience in international or applied projects are advantageous.
Application Process
Steps include: Apply → Pass qualifications → Join a project → Complete tasks → Receive payment.
Project Time Commitment
For this project, expected task commitments are around 10–20 hours per week during active project phases, depending on specific project requirements. This is an estimate and not a guaranteed workload.
Compensation Details
Contributors can earn up to $37 per hour, contingent upon their level of expertise and contribution pace. Compensation varies by project based on scope, complexity, and required skills. Note that other platform projects may present different earning levels based on their specific requirements.
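As an illustration of the workflow described above, here is a hedged sketch of what a contributor-authored problem plus its Python validation might look like. The circuit, component values, and tolerances are invented for this example and are not taken from Mindrift's task guidelines.

```python
# Hypothetical example problem: an RC low-pass filter (R = 1 kΩ, C = 1 µF)
# is driven by a 5 V step. What is the capacitor voltage at t = 2 ms?
# Analytic answer: V(t) = V_in * (1 - exp(-t / (R*C))).
import numpy as np
from scipy.integrate import solve_ivp

R, C, V_IN = 1e3, 1e-6, 5.0
TAU = R * C  # time constant: 1 ms

def rc_ode(t, v):
    # dV/dt = (V_in - V) / (R*C)
    return (V_IN - v) / TAU

# Numerical validation, as a problem author would perform it.
sol = solve_ivp(rc_ode, (0.0, 2e-3), [0.0], rtol=1e-9, atol=1e-12)
numeric = sol.y[0, -1]
analytic = V_IN * (1.0 - np.exp(-2e-3 / TAU))

assert abs(numeric - analytic) < 1e-6  # solver agrees with the closed form
print(round(numeric, 4))  # prints 4.3233
```

The key property the posting asks for is exactly this pairing: a clearly stated problem, an independent analytic (or library-based) check, and a documented verified answer.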
Job Responsibilities
Stay abreast of cutting-edge AI research trends, including LLM, Agentic AI, Diffusion, and Inference acceleration, and explore integration opportunities with our proprietary NPU.
Lead groundbreaking AI research aimed at publication in top-tier academic conferences.
Forge research collaborations with prestigious global institutions and manag…
Full-time|On-site|Yongin-si, Gyeonggi-do, South Korea
Note: This position is available exclusively for individuals wishing to apply as 'Professional Research Personnel' under military service exception; those without military obligations are not eligible to apply.

Upstage AI is a dynamic company dedicated to leveraging AI technology to address complex business challenges. Driven by our vision of 'Making AI Beneficial' and our mission to develop 'Artificial General Intelligence (AGI) for Work', we are focused on creating AI solutions that go beyond mere automation. Our efforts aim to revolutionize productivity through enhanced decision support and cost reduction.

To realize this vision, Upstage AI continuously advances its core technology in Large Language Models (LLM). We rigorously assess and enhance model performance through benchmark metrics tracked by Global Frontier, while simultaneously building a Workspace Benchmark Set that reflects the actual needs of our clients to maximize model practicality and performance. Our commitment is to solve intricate issues in the industry while leading global technological standards.

The LLM Post-training team focuses on key objectives: (1) enhancing knowledge and reasoning capabilities, (2) aligning with human preferences, and (3) improving agentic tool utilization performance. By employing scalable data construction methodologies, high-quality data filtering systems, and state-of-the-art learning techniques such as DPO, RLHF, and RLVR, we spearhead the development of world-class post-training technologies.
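One of the techniques named above, Direct Preference Optimization (DPO), can be sketched for a single preference pair in a few lines of plain Python. The β value and log-probabilities below are illustrative numbers chosen for this sketch; this is not Upstage's implementation or data.

```python
# Minimal DPO loss sketch for one (chosen, rejected) response pair:
# loss = -log sigmoid(beta * ((logp_c - ref_logp_c) - (logp_r - ref_logp_r)))
# All log-probabilities below are made up for illustration.
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Implicit reward of each response: beta * (policy logp - reference logp);
    # the loss pushes the chosen response's reward above the rejected one's.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# The policy assigns relatively more probability to the chosen response than
# the reference model does, so the loss falls below log(2) ≈ 0.693.
loss = dpo_loss(-12.0, -15.0, -13.0, -14.0, beta=0.1)
print(round(loss, 4))  # prints 0.5981
```

In practice the log-probabilities come from full sequence scores under the policy and a frozen reference model; the toy numbers here only show the shape of the objective.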
Joining this team means you will be at the forefront of evolving LLM technology and contributing to next-generation technological innovations that address industry challenges.
Representative Projects:
Reinforcement learning applications for LLM (mathematics, coding, general reasoning, tool utilization)
Efficient and effective reasoning strategies
A scalable pipeline for agentic tool use data synthesis
Language-specific reward models
Precision in instruction following
As project directions evolve with technological advancements, we prioritize the most impactful elements in LLM model development at any given time.
Employment Type:
Full-time (for new entrants and transfers of Professional Research Personnel)
Work Locations:
Gwanggyo Office (10-minute walk from Sanghyeon Station - ending use by March 2026)
Gangnam Office (7-minute walk from Gangnam Station - before March 2026)
Recruitment Process (Entirely Online):
Document screening
Algorithm coding test
Deep learning coding test
Technical interview (1st round)
Technical interview (2nd round)
Culture interview
Final interview
Announcement of final results
Full-time|On-site|Yongin-si, Gyeonggi-do, South Korea
Note: This position is exclusively for those eligible for military service exemption as 'Professional Research Personnel'. Candidates without military obligations are not eligible to apply.

At Upstage AI, we are driven by our vision of 'Making AI Beneficial' and our mission of 'Building Intelligence for the Future of Work'. We are developing next-generation AI solutions based on Vision-Language Models (VLM) that go beyond simply reading text to comprehensively understanding visual information alongside textual data, including images, charts, and tables. Our solutions empower clients to extract dormant insights from their vast document datasets, creating new opportunities for added value.

Our VLM team is engaged in research and development of web-scale data collection and synthesis, large-scale pre-training/post-training, and various evaluation methodologies. We aim to provide user-friendly AI solutions that enable everyone to utilize AI technology effortlessly. With our advanced OCR capabilities and key-value extraction technologies, we have recently launched a Document Parsing model that analyzes various document layouts. Through these innovations, we strive to maximize work efficiency and productivity for businesses, ensuring that AI delivers tangible value in practical applications.

Our commitment to making AI beneficial is highlighted by our Private LLM services, which optimize LLM technology for business environments to enhance operational efficiency. We are dedicated to launching a series of APIs that make world-class AI models easily accessible across various sectors, thereby supporting our corporate clients' success. Among these, Upstage Document AI stands out for its exceptional OCR and information extraction capabilities, aiming to automate and streamline cumbersome document processing through AI.

We are looking for passionate new members to join us on this exciting and challenging journey.
If you have a zeal for leading technology in the multimodal AI field and aspire to connect research with real-world services through an end-to-end AI experience, you will find a perfect fit in the Upstage VLM team.
Key Responsibilities
Design and build data collection pipelines
This includes the collection and filtering of multimodal data (document images, field photos, charts, etc.)
Research and apply preprocessing and improvement techniques to enhance data quality
Model training
Research and implement pretraining and post-training techniques for large-scale vision encoders and vision-language models
Develop and apply data and training strategies for various vision-language tasks
Research model structure improvements and optimization techniques considering training and inference efficiency
Evaluation
Investigate and apply various evaluation techniques to assess document-centric VLM model performance
Develop and introduce new evaluation methods that align with real-world usage scenarios
Design and implement internal benchmarking tools for continuous improvement and scalability
Additional tasks
Share research outcomes as top-tier international conference papers or open-source code
Lead preliminary research for reproducing recent papers and share techniques within the team
About Us
At TwelveLabs, we are at the forefront of creating innovative multimodal foundation models that can interpret videos with human-like understanding. Our groundbreaking models elevate the standards in video-language modeling, enabling intuitive interactions and comprehensive analyses of diverse media formats.

With over $110 million secured in Seed and Series A funding, we are supported by leading venture capital firms such as NVIDIA’s NVentures, NEA, Radical Ventures, and Index Ventures, along with esteemed AI pioneers like Fei-Fei Li, Silvio Savarese, and Alexandr Wang. Our headquarters are based in San Francisco, with a significant presence in Seoul, reflecting our dedication to global innovation.

Our collaboration with NVIDIA and AWS grants us access to cutting-edge chips, including B300s, empowering us to expand the possibilities in video AI.

We cherish the diverse journeys of our team members, believing that our varied cultural, educational, and life experiences are key to challenging the status quo. We seek passionate individuals eager to impact the technology landscape and help us redefine video understanding and multimodal AI.
Team Overview
The Embedding & Search team is integral to TwelveLabs' video understanding initiatives. We craft unified embedding spaces that encompass video, audio, text, and other modalities, and develop retrieval systems designed to deliver precise results that align with user intent across extensive video catalogs.

Our research encompasses a wide array of challenges including multimodal representation learning via contrastive and probabilistic methods, temporal video understanding—focusing on hierarchical segmentation and boundary detection—neural ranking architectures for multi-stage retrieval, and modeling user behavior to gain insights into how users search for and engage with video content.
We prioritize both algorithmic advancements that set new benchmarks and human-centric insights that enhance the utility of our systems. Our research team benefits from access to state-of-the-art chips like NVIDIA B300s, which accelerate our research-to-production transitions.
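The contrastive representation learning mentioned above can be illustrated with a toy InfoNCE-style loss over paired embeddings, where matching (video, text) pairs sit on the diagonal of a similarity matrix. The random vectors, dimensions, and temperature below are assumptions for demonstration only, not TwelveLabs' models or training setup.

```python
# Toy InfoNCE-style contrastive loss over a batch of paired embeddings.
import numpy as np

rng = np.random.default_rng(0)

def info_nce(video_emb, text_emb, temperature=0.07):
    # L2-normalize so dot products become cosine similarities.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature              # (batch, batch) similarity matrix
    targets = np.arange(len(v))                 # positives on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[targets, targets].mean()  # cross-entropy to the diagonal

batch, dim = 4, 8
video = rng.normal(size=(batch, dim))
text = video + 0.05 * rng.normal(size=(batch, dim))     # near-aligned pairs
loss_aligned = info_nce(video, text)
loss_random = info_nce(video, rng.normal(size=(batch, dim)))
assert loss_aligned < loss_random  # aligned pairs score far better
```

Minimizing this loss pulls each video embedding toward its own caption and away from the other captions in the batch, which is the basic mechanism behind a unified multimodal embedding space.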
Join Upstage AI, a pioneering company dedicated to solving complex business challenges through cutting-edge AI technology. Our vision, 'Making AI Beneficial,' and our mission to develop 'Artificial General Intelligence (AGI) for Work' drive our commitment to enhance productivity through innovative AI solutions that support intricate decision-making and cost reduction.

To realize our vision, we continuously advance Large Language Models (LLMs), the cornerstone of AGI technology. By leveraging benchmark metrics tracked by Global Frontier, we assess and enhance model performance while building a Workspace Benchmark Set that reflects client needs, maximizing both practicality and effectiveness. Through these efforts, Upstage AI aims to address intricate industry challenges while leading global technological standards.

The LLM Post-training team focuses on enhancing (1) knowledge/reasoning capabilities, (2) human preference alignment, and (3) agentic tool utilization. Utilizing scalable data construction methodologies, high-quality data filtering systems, and the latest learning techniques such as DPO, RLHF, and RLVR, this team is committed to developing world-class post-training technologies.
By joining us, you'll be at the forefront of evolving LLM technologies, contributing to next-generation technological innovations that solve real-world industry problems.
Key Projects:
Reinforcement learning for LLM (mathematics, coding, general reasoning, tool use)
Efficient & effective reasoning
A scalable agentic tool use data synthesis pipeline
Language-specific reward models
Precise instruction following
Projects are dynamic and evolve based on technological advancements and the most impactful elements in LLM model development.
Internship Details:
Duration: 3 to 6 months
Recruitment Process:
Application Screening
Algorithm Coding Test
Deep Learning Coding Test
Technical Interview (Round 1)
Technical Interview (Round 2)
Culture Interview
Final Results Announcement
* The process may be adjusted based on circumstances.
Work Environment:
Work remotely anywhere on Earth!
We support costs for beverages when using cafes for work, study rooms, or co-working spaces.
We provide financial assistance for work-related software, books, educational materials, and other resources necessary for growth.
Part-time|$35/hr|Remote — South Korea
Please submit your CV in English, including your English proficiency level.

Mindrift bridges experts with project-based AI roles for prominent tech firms, concentrating on the evaluation and enhancement of AI systems. Engagements are project-based; this is not permanent employment.
Role Overview
Each project presents distinct challenges, and contributors may be tasked with:
Crafting innovative computational physics problems that replicate authentic research workflows;
Designing Python-based solutions utilizing libraries such as Numpy, SciPy, and Sympy;
Ensuring tasks are computationally demanding and cannot be solved manually in a reasonable timeframe (days/weeks);
Creating problems requiring complex reasoning across mechanics, electromagnetism, thermodynamics, and quantum mechanics;
Formulating problems grounded in genuine research challenges or practical physics applications;
Validating solutions through Python utilizing standard physics simulation libraries;
Documenting problem statements comprehensively and providing verified correct answers.
Desired Qualifications
This position suits physicists experienced in Python who are open to part-time, non-permanent projects. Ideal candidates will possess:
A degree in Physics (Theoretical, Experimental, or Computational) or related disciplines;
Proficiency in Python for numerical validation.
Familiarity with MATLAB, R, C, SQL, Numpy, Pandas, SciPy, or equivalent programming languages is acceptable;
At least 2 years of relevant professional experience in applied research or teaching;
Experience with numerical simulation techniques;
Capability to design problems reflecting genuine physics research workflows;
Creativity in problem formulation across various physics disciplines;
Understanding of physics modeling and approximation methods;
Strong command of written English (C1+).
How the Process Works
Apply → Meet qualifications → Join a project → Complete tasks → Receive payment
Time Commitment
For this project, tasks are estimated to require approximately 10–20 hours per week during active phases, contingent on project needs. This is an estimate and not a guaranteed workload, applicable only while the project is active.
Compensation
Contributors can earn up to $35 per hour, depending on their expertise and contribution pace. Compensation varies across projects based on scope, complexity, and expertise required. Note that other projects on the platform may present different earning levels based on their specific requirements.
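A hedged sketch of the kind of problem-plus-validation pair this posting describes: a quantum-mechanics example whose numerical answer is checked against a known analytic result. The grid size, box width, and tolerance are invented for illustration and are not project requirements.

```python
# Hypothetical example problem: verify numerically that the ground-state
# energy of a 1D quantum harmonic oscillator is E0 = 1/2 (in units where
# hbar = m = omega = 1), by diagonalizing a finite-difference Hamiltonian.
import numpy as np
from scipy.linalg import eigh_tridiagonal

N, L = 2000, 20.0                  # grid points and box width (illustrative)
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + (1/2) x^2, discretized with central differences:
# the second derivative contributes 1/dx^2 on the diagonal and -1/(2 dx^2)
# on the off-diagonals after multiplying by the 1/2 kinetic prefactor.
diag = 1.0 / dx**2 + 0.5 * x**2
offdiag = np.full(N - 1, -0.5 / dx**2)

# Only the lowest eigenvalue is needed.
energies = eigh_tridiagonal(diag, offdiag, select='i', select_range=(0, 0))[0]

assert abs(energies[0] - 0.5) < 1e-3   # matches the analytic hbar*omega/2
```

The numerical check is independent of the analytic derivation, which is exactly the validation discipline the task list asks for.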
Under our vision of "Making AI Beneficial" and our mission of "Building intelligence for the future of work," Upstage is building next-generation AI solutions based on Vision-Language Models (VLM) that go beyond merely reading letters and sentences to perceive visual information such as photos, charts, and tables and understand it together with text. This gives customers the opportunity to extract information lying dormant in their vast document data and realize new insights and added value. To this end, Upstage's VLM team conducts research and development on web-scale data collection and synthesis, large-scale pre-training and post-training, and a variety of evaluation methods.

Upstage aims to provide 'easy-to-use AI solutions' so that anyone can readily make use of AI technology. We already possess top-tier OCR technology and Key-Value extraction technology that automatically extracts meaningful information from documents, and we recently released a Document Parsing model that analyzes diverse document layouts. Building on these technologies, Upstage provides tailored AI solutions that maximize companies' work efficiency and productivity, so that AI creates real value in actual business.

We also offer Private LLM services that optimize LLM technology for business environments to raise companies' work efficiency and productivity, and, to make AI beneficial to the world, we have launched an API series that makes world-class AI models easy to use across many fields, contributing to the business success of our corporate clients. Among these, Upstage Document AI is a product built on world-class OCR and information-extraction technology, with the goal of automating and streamlining cumbersome document processing through AI.

We are looking for new members to join us on this exciting and challenging journey. If you are passionate about leading technology in the multimodal AI field, want an end-to-end AI experience that goes beyond research and connects to real services, and hope to expand your skills through collaboration and grow rapidly through productization, you will be a perfect fit for the Upstage VLM team.
At Upstage, we are dedicated to leveraging AI technology to solve complex business challenges. Our vision of "Making AI Beneficial" and our mission of developing "Artificial General Intelligence (AGI) for Work" guide our operations. We focus on developing AI solutions that go beyond simple task automation to dramatically enhance productivity through sophisticated decision-making support and cost reduction.

To realize this vision, Upstage continuously advances the core technologies of AGI, particularly in the realm of Large Language Models (LLM). We enhance our technical competitiveness by diagnosing and improving model performance against benchmark metrics tracked by Global Frontier while developing a Workspace Benchmark Set that reflects our clients' real needs, maximizing the practicality and effectiveness of our models. Our aim is to solve complex industry problems while leading the global standards for technology.

The LLM Post-training team focuses on three primary goals: (1) enhancing knowledge and reasoning capabilities, (2) aligning with human preferences, and (3) improving performance in agentic tool use. We utilize scalable data construction methodologies, a high-quality data filtering system, and the latest learning techniques such as DPO, RLHF, and RLVR to spearhead the development of world-class post-training technologies.
By joining this team, you will be at the forefront of evolving LLM technology and contribute to next-generation technological innovations that address real-world industry problems.
Representative Projects
Reinforcement learning applications for LLM (mathematics, coding, general reasoning, tool utilization)
Efficient & effective reasoning methods
Development of a scalable data synthesis pipeline for agentic tool use
Creation of language-specific reward models
Precision in instruction following
** Project focus may change according to technology trends and circumstances, concentrating on the most impactful technological elements for LLM model advancement at any given time.
Working Conditions
Full-time position or Internship (Experiential, 3-6 months)
Recruitment Process - Fully Online
Document screening
Algorithm coding test
Deep learning coding test
Technical interview (1st round)
Technical interview (2nd round)
Cultural interview
Final interview
Final results announcement
* The process may be adjusted depending on circumstances.
* A reference check may occur after the final interview.
Work Environment
Work together from anywhere on Earth! We offer the flexibility to work remotely.
You can freely choose equipment for remote work within a budget of 5 million KRW.
At Upstage, we are dedicated to solving business challenges through advanced AI technology, driven by our vision of "Making AI Beneficial" and our mission to achieve "Artificial General Intelligence (AGI) for Work". We focus on developing AI solutions that go beyond mere automation, enhancing productivity through complex decision support and cost reduction.

To realize this vision, Upstage continuously advances the core technologies of AGI, particularly Large Language Models (LLM). We enhance our technological competitiveness by diagnosing and improving model performance through benchmark metrics tracked by Global Frontier, while also creating Workspace Benchmark Sets that reflect our clients' actual needs, maximizing the practicality and performance of our models. In this way, we strive to solve complex problems in various industries and lead global technology standards.

The LLM Evaluation Team is responsible for researching and developing performance evaluation benchmarks and toolkits for LLMs, continuously monitoring benchmark trends that align with Solar's technology strategy. The development of benchmarks encompasses all processes required for performance assessment, including the design of evaluation datasets and metrics. The key objectives of this process are: 1) to extend/create new benchmarks to overcome existing limitations, 2) to develop benchmarks reflecting Korean culture and language characteristics, and 3) to build Work Intelligence benchmarks based on real-world scenarios. Toolkit development will focus on creating a cost/resource-efficient evaluation framework and an environment for analyzing inference results.
Joining this team offers the opportunity to evaluate and diagnose frontier models and Solar from multiple perspectives while collaboratively designing a data-driven technology roadmap.
Representative Projects
Benchmark Development
Agent Benchmark
Reasoning Benchmark
Human Alignment Benchmark
Solar Edge-case Benchmark
Evaluation Toolkit Development
Diverse Benchmark Evaluation Framework
Dashboard & Leaderboard of Evaluation Results
** Projects may evolve with technological trends, focusing on the most impactful elements for the advancement of LLM models at each stage.
Work Structure
Internship (Path to Full-time, review for conversion to full-time after 3-6 months)
Recruitment Process - Conducted entirely online
Document Screening
Algorithm Coding Test
Assignment Test
Technical Interviews (1st/2nd)
Culture Interview
Final Result Announcement
* The process may be adjusted based on circumstances.
Work Environment
Anywhere On Earth But Together! We embrace the philosophy of working together from anywhere.
Part-time|$35/hr|Remote — South Korea
To apply, please submit your CV in English and specify your English proficiency level.

Mindrift is a pioneering platform that connects talented specialists with exciting project-based AI opportunities offered by leading technology companies. Our primary focus is on the testing, evaluation, and enhancement of AI systems. Please note that participation is project-based and does not involve permanent employment.
Opportunity Overview:
Each project presents unique challenges, and contributors may engage in the following tasks:
Design innovative computational physics problems that emulate authentic physics research workflows;
Create programming challenges requiring Python solutions (using libraries such as Numpy, SciPy, and Sympy);
Ensure that problems are computationally demanding, requiring days or weeks to solve manually;
Develop challenges that necessitate complex reasoning in fields such as mechanics, electromagnetism, thermodynamics, and quantum mechanics;
Base problems on real-world research issues or practical applications in physics;
Validate solutions using Python with established physics simulation libraries;
Clearly document problem statements and provide verified, accurate solutions.
Candidate Profile:
This position is ideal for quantum researchers with Python experience who are interested in flexible part-time, non-permanent projects.
Preferred qualifications include:
A degree in Physics (Theoretical, Experimental, or Computational) or related fields;
Proficiency in Python for numerical validation; familiarity with MATLAB, R, C, SQL, Numpy, Pandas, SciPy, or other domain-specific libraries is acceptable;
A minimum of 2 years of professional experience in applied research or teaching;
Experience with numerical simulation methodologies;
Ability to craft problems that reflect real physics research workflows;
Creative problem-solving skills across various physics domains;
Familiarity with physics modeling and approximation techniques;
Strong written communication skills in English (C1 or higher).
Application Process:
Apply → Meet qualification requirements → Join a project → Complete tasks → Get compensated
Project Commitment:
For this project, tasks are projected to require approximately 10-20 hours per week during active phases, depending on project needs. This estimate is not a guaranteed workload and applies only while the project is ongoing.
Compensation:
Contributors can earn up to $35 per hour, contingent upon experience and contribution pace. Note that compensation may vary across projects depending on their complexity and required expertise.
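To make the workflow above concrete, here is a minimal sketch of a mechanics problem validated with SciPy: the exact pendulum period versus the small-angle approximation. The release angle and tolerance are illustrative choices for this sketch, not project requirements.

```python
# Hypothetical example problem: the exact period of a pendulum released from
# 60 degrees, compared to the small-angle approximation T0 = 2*pi*sqrt(L/g).
# Exact result: T = T0 * (2/pi) * K(m) with m = sin^2(theta0/2), where K is
# the complete elliptic integral of the first kind (scipy's m convention).
import numpy as np
from scipy.special import ellipk

g, L = 9.81, 1.0
theta0 = np.radians(60.0)

T0 = 2.0 * np.pi * np.sqrt(L / g)      # small-angle period
m = np.sin(theta0 / 2.0) ** 2          # parameter m = k^2 for ellipk
T = T0 * (2.0 / np.pi) * ellipk(m)     # exact period

assert T > T0                          # large swings are slower
assert abs(T / T0 - 1.0732) < 1e-3     # ~7.3% correction at 60 degrees
```

A contributor would then document the problem statement, the closed-form reference result, and the Python validation together, so the answer can be verified independently of any single derivation.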
Join NEOWIZ in revolutionizing the gaming landscape by crafting astonishing games that captivate everyone.

Our mission extends beyond publishing competitive IP; we are dedicated to developing games across various genres and platforms to deliver unmatched experiences and surprises to players worldwide.

At NEOWIZ, we foster a collaborative environment where diverse ideas are exchanged to discover better solutions. We believe that outstanding performance and teamwork stem from robust information sharing, and we embrace challenges with passion and determination.

We invite REAL PERFORMERS to join us in creating REAL GAMES that will change the gaming industry.
Department Overview
The NEOWIZ New Technology Lab focuses on applying proven cutting-edge AI technologies to real game development and live services, prioritizing practicality over academic research.

We analyze the latest global papers and open-source technologies weekly, transforming them into robust AI services capable of handling NEOWIZ's significant game traffic and data.

We are seeking a Research Specialist eager to experience the thrill of seeing their code directly impact the experiences of millions of gamers and colleagues.
Join Us at Twelve Labs
At Twelve Labs, we are at the forefront of developing revolutionary multimodal foundation models that interpret videos with human-like understanding. Our cutting-edge models have set new benchmarks in video-language modeling, enhancing our capabilities in analyzing and interacting with diverse media forms.

Backed by over $110 million in Seed and Series A funding, we are supported by prestigious venture capital firms including NVIDIA’s NVentures, NEA, Radical Ventures, and Index Ventures, along with esteemed AI pioneers such as Fei-Fei Li and Silvio Savarese. While our headquarters is in San Francisco, our significant presence in Seoul highlights our dedication to global innovation.

Our strategic partnerships with NVIDIA and AWS provide us with access to top-tier hardware, including the B300s, which empower our advancement in video AI capabilities.

We embrace the unique journeys of every individual and believe that our diverse backgrounds drive innovation. We seek passionate individuals who resonate with our mission and are eager to make impactful contributions as we redefine technology and transform the world of video understanding and multimodal AI.
About the Video Cognition System Team
Our team is dedicated to creating the first-ever video cognition system capable of processing extensive video libraries into a structured, queryable Video Memory & Cortex for vertical LLM agents.

We are addressing fundamental questions surrounding machine cognition, focusing on perception, memory, reasoning, and attention. Our goal is to design innovative memory structures that exceed traditional context windows and build a reasoning cortex for comprehensive video analysis.

Our research endeavors cover corpus-level reasoning, knowledge extraction, indexing architectures, and multi-video understanding, requiring a synergistic collaboration between research and engineering to create systems that are both scientifically rigorous and impactful.
About Us
At Twelve Labs, we are on the hunt for talented individuals who are eager to shape the global standard for video understanding AI!

We are developing cutting-edge AI models specifically tailored for video content, enabling advanced functionalities such as search, analysis, summarization, and insights generation from vast video datasets.

Our technology is utilized by the largest sports leagues to swiftly and accurately identify highlights from extensive game footage, enhancing the personalized viewing experience. In South Korea, integrated control centers leverage our models to effectively analyze CCTV footage for rapid crisis response, while major global broadcasters and studios use our technology to create content for billions of viewers.

As a deep tech startup with offices in San Francisco and Seoul, Twelve Labs has been recognized as one of the top 100 AI startups globally by CB Insights for four consecutive years. We have raised over $110 million from renowned venture capital firms and companies such as NVIDIA, NEA, Index Ventures, Databricks, and Snowflake. Our AI model is uniquely offered through Amazon Bedrock in South Korea, and we are committed to creating innovative products alongside exceptional colleagues while growing with our global clientele.

At Twelve Labs, we operate based on core values that include:
Honesty and introspection towards oneself and the team.
Resilience and humility in the face of failure and feedback.
A commitment to continuous learning and elevating team capabilities together.

If you enjoy solving challenging problems and growing alongside your team, this is the opportunity for you at Twelve Labs!
Team Overview
You will join the team responsible for the research and development of Marengo, our multimodal embedding model.
This innovative model integrates various modalities, including video, audio, and text, into a unified embedding space.The team tackles diverse research topics, including contrastive learning, temporal video understanding, and multimodal representation learning. We oversee the entire model development process, from building large-scale learning data pipelines to designing model architectures, optimizing distributed learning, and creating evaluation frameworks. With access to top-tier GPU resources such as the NVIDIA B300, we can rapidly conduct large-scale experiments.In an environment with a short gap between research and production, we closely collaborate with the Search, Product, and Infrastructure teams to continuously enhance the quality of models used by thousands of clients worldwide.Role OverviewAs the ML Research Engineering Manager on the Marengo team, you will lead and develop the research engineering group responsible for Twelve Labs' multimodal embedding models. You will own the technical roadmap for the team and support the growth of its engineers.This is a player-coach role where you will manage a team of ML research engineers focused on model architecture, training infrastructure, and data pipelines while remaining technically engaged enough to make sound architectural decisions and assess research directions. We seek someone who has experience building and deploying production ML systems and is eager to amplify their impact by empowering their team.
Key Responsibilities
Manage and operate the testing infrastructure for our software and hardware solutions.
Conduct operations and maintenance on bare-metal equipment used for evaluations.
Support the evaluation infrastructure to ensure successful product validation.
Key Responsibilities
Devise and execute comprehensive verification strategies at the block, IP, and SoC levels, building test benches to enable effective verification at each level.
Create and implement functional tests based on the established verification test plans.
Drive the design verification process to completion, meeting defined metrics for functional and code coverage.
Analyze, debug, and fix functional discrepancies in the design, working closely with the design team.
Collaborate with cross-disciplinary teams, including Design, Modeling, Emulation, and Silicon Validation, to ensure superior design quality.
Key Responsibilities
Integrate and validate IP blocks within the System on Chip (SoC) architecture and assist in the physical implementation process.
Formulate specifications, architecture, and operational scenarios at the SoC level.
Understand standard interface specifications (e.g., PCIe) and chip operational scenarios to correctly configure integrated IP blocks.
Conduct performance analysis through chip-level simulations focused on bus and memory bandwidth, alongside FPGA prototyping.
Key Responsibilities
Design, develop, and maintain Linux PCIe device drivers and kernel modules to optimize system performance.
Enhance PCIe subsystem functionality, focusing on DMA, IOMMU, interrupts, and BAR mapping.
Create user-space libraries and APIs that enable high-speed data transfers.
Collaborate with hardware and firmware teams to build end-to-end PCIe I/O pipelines.
Devise effective memory management techniques and implement zero-copy data transfer mechanisms.
About the Role
We are seeking a dedicated Security Engineer reporting directly to the CTO. This role is instrumental in establishing the company's security framework and implementing a Zero Trust architecture across the organization. You will be responsible for developing security policies, configuring systems, managing accounts, controlling access, ensuring cloud security, and conducting penetration testing.
Key Responsibilities
Establish and execute an enterprise security strategy and policies based on Zero Trust principles.
Build and operate identity and access management systems (SSO, IAM, RBAC, etc.).
Design and configure security architecture for SaaS and cloud infrastructure (AWS, GCP, etc.).
Diagnose and remediate security vulnerabilities in internal systems and services.
Collaborate with development and infrastructure teams to integrate security practices (code reviews, CI/CD security, etc.).
Set up and manage incident response systems and monitoring environments.
Full-time|On-site|Yongin-si, Gyeonggi-do, South Korea
At Upstage AI, we specialize in leveraging AI technology to solve complex business challenges. Our vision, "Making AI Beneficial," and our mission, "Building Intelligence for the Future of Work," drive us to deliver AI products and solutions that enhance productivity through advanced decision-making support and cost reduction. We empower businesses to innovate their processes for more effective growth.
Our goal is to democratize AI technology by offering user-friendly AI solutions. We possess cutting-edge OCR technology and key-value extraction capabilities for retrieving meaningful information from documents. Recently, we unveiled a Document Parsing model that analyzes diverse document layouts. With these advancements, Upstage AI provides tailored AI solutions that maximize operational efficiency and productivity, ensuring AI delivers real value in business contexts.
We also offer Private LLM services optimized for business environments, enabling companies to improve operational efficiency. Our API series gives easy access to world-class AI models across sectors, contributing to our clients' business success. Among our offerings, Upstage Document AI stands out for its superior OCR and information extraction capabilities, automating and streamlining tedious document processes.
The AI DevOps team is a core engineering unit focused on designing, building, and operating Upstage's AI products and services to create value for our customers. We automate the deployment of AI-based solutions and establish a stable operational framework, designing delivery pipelines tailored to client environments and developing automation tools and operational standards for efficient deployment and maintenance. We also foster trust-based technology partnerships by understanding use cases across industries and resolving client-specific challenges on-site.
The AI DevOps team thrives on a culture of continuous improvement and innovation. We apply the latest technologies to solve practical use-case problems across industries, valuing the experience of rapid growth. For instance, to ensure the stable operation of numerous client systems, we build internal solutions such as deployment automation tools, inspection management tools, and monitoring tools, collaborating with product teams as needed to contribute to product-level improvements.
DevOps here is not just an operational unit; it is an engineering organization that technically enables client business growth, serving as a bridge between development, operations, and customer experience. You will have the opportunity to identify essential development and design needs in the ever-evolving AI industry and contribute directly to our clients' business success.
Part-time|$37/hr|Remote — South Korea
We invite you to submit your CV in English, highlighting your English proficiency level.
At Mindrift, we connect talented professionals with project-based AI opportunities for top technology firms, focusing on the assessment, evaluation, and enhancement of AI systems. Please note that participation is project-based and not permanent employment.
Opportunity Overview
Each project presents distinct tasks; contributors might engage in the following activities:
Designing computational engineering problems that reflect authentic engineering workflows.
Creating problems that require Python programming for engineering calculations and simulations.
Ensuring problems are computationally demanding and involve numerical methods or iterative solutions.
Developing problems that include system design, optimization, and analysis.
Basing problems on real-world research challenges or practical engineering applications.
Validating solutions in Python with established engineering libraries.
Documenting problem statements clearly and providing verified correct answers.
Ideal Candidate Profile
This role suits engineers with Python expertise who are open to part-time, non-permanent project work. The ideal candidate will have:
A degree in Electrical Engineering or a related discipline.
Proficiency in Python for numerical validation; familiarity with MATLAB, R, C, SQL, NumPy, Pandas, SciPy, or a domain-specific language is also acceptable.
At least 2 years of professional experience in applied, research, or teaching environments.
An understanding of practical engineering constraints and approximations.
Strong written English communication skills (C1+ level).
Professional certifications (e.g., CMME, SAS certifications, CAP) and experience with international or applied projects are a plus.
Application Process
Apply → Pass qualifications → Join a project → Complete tasks → Receive payment.
Project Time Commitment
Expected commitment is around 10–20 hours per week during active project phases, depending on specific project requirements. This is an estimate, not a guaranteed workload.
Compensation
Contributors can earn up to $37 per hour, depending on their level of expertise and pace of contribution. Compensation varies by project based on scope, complexity, and required skills; other platform projects may offer different rates.
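To illustrate the kind of task described in the Opportunity Overview above, here is a hypothetical example (not an actual Mindrift assignment) of a computational engineering problem that requires an iterative numerical solution, with the answer validated in Python. It solves the Colebrook-White equation for the Darcy friction factor of turbulent pipe flow; the Reynolds number and relative roughness values are illustrative choices.

```python
import math

def colebrook_friction_factor(re: float, rel_roughness: float,
                              tol: float = 1e-12, max_iter: int = 200) -> float:
    """Solve the implicit Colebrook-White equation
        1/sqrt(f) = -2 log10( (eps/D)/3.7 + 2.51/(Re*sqrt(f)) )
    by fixed-point iteration on x = 1/sqrt(f)."""
    x = 2.0  # initial guess: 1/sqrt(f) = 2, i.e. f = 0.25
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / (x * x)

# Illustrative inputs: Re = 1e5, relative roughness eps/D = 1e-4.
f = colebrook_friction_factor(re=1e5, rel_roughness=1e-4)

# Validate by substituting f back into the equation; the residual
# should be numerically zero if the iteration converged.
residual = (1.0 / math.sqrt(f)
            + 2.0 * math.log10(1e-4 / 3.7 + 2.51 / (1e5 * math.sqrt(f))))
print(f, residual)
```

A contributor would then document the problem statement (the equation, the inputs, and the required tolerance) alongside the verified answer produced by the script.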