Federated Learning (FL) is a promising privacy-aware distributed learning framework that can be deployed across heterogeneous devices, such as mobile phones, desktops, and edge devices equipped with CPUs or GPUs. In server-based Federated Learning as a Service (FLaaS), FL enables the central server to coordinate training across these devices without direct access to their local data, thereby enhancing privacy and data security. Low-Rank Adaptation (LoRA) is a method that fine-tunes models efficiently by restricting updates to a low-dimensional subspace of the model's parameters, significantly reducing computational and memory costs compared to full-parameter fine-tuning. Combined with FL, especially in a FLaaS environment, LoRA enables flexible and efficient deployment across hardware with varying computational capabilities by adjusting the rank of each local model. However, in LoRA-enabled FL, different clients may train models with different ranks, which complicates model aggregation on the server: aggregating models of different ranks requires padding their weights to a uniform shape, which can degrade the global model's performance. To address this, we propose RBLA, a novel model aggregation method designed for heterogeneous LoRA structures that preserves key features across models of different ranks. This paper first analyzes the issues with current padding methods that reshape models for aggregation in a FLaaS environment. We then introduce RBLA, a rank-based aggregation method that retains both low-rank and high-rank features. Finally, we demonstrate the effectiveness of RBLA through comparative experiments with state-of-the-art methods.
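
As an illustration of the padding step referred to above, the sketch below zero-pads LoRA factors of heterogeneous ranks to the maximum client rank before a FedAvg-style weighted average. The matrix shapes, function names, and uniform client weights are illustrative assumptions, not the RBLA procedure itself; it only shows the baseline padding approach whose shortcomings this paper analyzes.

```python
import numpy as np

def pad_lora_to_rank(A, B, target_rank):
    """Zero-pad one LoRA adapter (B @ A) from its own rank to `target_rank`.

    A has shape (r, in_features); B has shape (out_features, r).
    Zero rows are appended to A and zero columns to B so every client
    shares the shapes (target_rank, in_features) and (out_features, target_rank).
    """
    r = A.shape[0]
    A_pad = np.pad(A, ((0, target_rank - r), (0, 0)))
    B_pad = np.pad(B, ((0, 0), (0, target_rank - r)))
    return A_pad, B_pad

def aggregate_padded(clients, weights):
    """FedAvg-style weighted average of zero-padded A and B factors."""
    r_max = max(A.shape[0] for A, _ in clients)
    A_sum, B_sum = None, None
    for (A, B), w in zip(clients, weights):
        A_p, B_p = pad_lora_to_rank(A, B, r_max)
        A_sum = w * A_p if A_sum is None else A_sum + w * A_p
        B_sum = w * B_p if B_sum is None else B_sum + w * B_p
    return A_sum, B_sum

# Example: two clients with ranks 4 and 8 on a 16x32 layer.
rng = np.random.default_rng(0)
clients = [
    (rng.standard_normal((4, 32)), rng.standard_normal((16, 4))),
    (rng.standard_normal((8, 32)), rng.standard_normal((16, 8))),
]
A_g, B_g = aggregate_padded(clients, weights=[0.5, 0.5])
print(A_g.shape, B_g.shape)  # (8, 32) (16, 8)
```

Because lower-rank clients contribute only zeros to the extra rank components, those components are attenuated in the average, which is one way the degradation noted above can arise.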