This paper addresses the limitation that existing medical language models focus primarily on English by presenting a multilingual medical language model. The contributions are threefold: (1) a multilingual medical corpus (MMedC) with 25.5B tokens spanning six languages; (2) a multilingual medical multiple-choice question-answering benchmark (MMedBench) with rationales; and (3) an evaluation of several open-source LLMs, including those further trained on MMedC. The final model, MMed-Llama 3 (8B parameters), surpasses other open-source models on MMedBench and on English benchmarks, even rivaling GPT-4.
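
To make the evaluation setting concrete, here is a minimal sketch of how multiple-choice accuracy on a benchmark like MMedBench could be computed via per-option likelihood scoring. This is an illustration only, not the paper's official protocol: the model identifier, the dataset field names (`question`, `options`, `answer_idx`), and the toy sample are all assumptions.

```python
# Hypothetical sketch of multiple-choice QA evaluation in the style of
# MMedBench. The model ID and dataset fields below are placeholders,
# not the paper's released artifacts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/mmed-llama-3-8b"  # placeholder; substitute a real checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

def score_option(question: str, option: str) -> float:
    """Return the average log-likelihood of `option` appended to `question`."""
    prompt = f"Question: {question}\nAnswer: {option}"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return -out.loss.item()  # negative cross-entropy; higher is better

def predict(question: str, options: list[str]) -> int:
    """Pick the option to which the model assigns the highest likelihood."""
    return max(range(len(options)), key=lambda i: score_option(question, options[i]))

# Toy sample (invented, not from MMedBench):
sample = {
    "question": "Which vitamin deficiency causes scurvy?",
    "options": ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"],
    "answer_idx": 2,
}
pred = predict(sample["question"], sample["options"])
print("correct" if pred == sample["answer_idx"] else "incorrect")
```

Accuracy over a benchmark split would then be the fraction of samples where `pred` matches `answer_idx`; evaluating rationales, which MMedBench also includes, would require a separate generation-quality comparison.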