Create bucket user using mc

Context

  • MinIO set up with a root user
  • MinIO running in a Docker container
  • Port 9000 exposed (a minimal server setup sketch is shown below)
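
For reference, a minimal way to run such a server looks like this (the data directory, container name, and console port 9001 are assumptions, adjust them to your setup):

sh
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=<username> \
  -e MINIO_ROOT_PASSWORD=<password> \
  -v ~/minio-data:/data \
  minio/minio server /data --console-address ":9001"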

Set up the mc client

The username and password are the root credentials of the MinIO server, the same ones used to log in to the MinIO web interface.

Using Docker:

sh
docker run --rm -it -v ~/.mc:/root/.mc minio/mc alias set myminio http://<minio-server-ip>:9000 <username> <password>

You should get: Added `myminio` successfully.
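
If you have the mc binary installed locally instead of using the Docker image, the equivalent command is:

sh
mc alias set myminio http://<minio-server-ip>:9000 <username> <password>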

Test connection

sh
docker run --rm -it -v ~/.mc:/root/.mc minio/mc ls myminio

You should see the list of existing buckets (if any).
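
The Python test at the end of this guide uploads to a bucket named my-bucket; if it does not exist yet, you can create it now:

sh
docker run --rm -it -v ~/.mc:/root/.mc minio/mc mb myminio/my-bucket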

Create and test access key

Add user

sh
docker run --rm -it -v ~/.mc:/root/.mc minio/mc admin user add myminio <access_key> <secret_key>

You should get: Added user `<access_key>` successfully.
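
You can confirm that the user exists with:

sh
docker run --rm -it -v ~/.mc:/root/.mc minio/mc admin user list myminio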

Set policy

sh
docker run --rm -it -v ~/.mc:/root/.mc minio/mc admin policy attach myminio readwrite --user <access_key>

You should get: Successfully attached policy `readwrite` to user `<access_key>`.
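
To test the new credentials directly with mc, register them as a second alias (the alias name myuser is an assumption) and list the buckets the user can access:

sh
docker run --rm -it -v ~/.mc:/root/.mc minio/mc alias set myuser http://<minio-server-ip>:9000 <access_key> <secret_key>
docker run --rm -it -v ~/.mc:/root/.mc minio/mc ls myuser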

Test using Python

sh
echo "boto3" > requirements.txt
python3 -m venv venv
source venv/bin/activate
python3 -m pip install -r requirements.txt
python
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://<minio-server-ip>:9000",  # must include the http:// or https:// scheme
    aws_access_key_id="<access_key>",
    aws_secret_access_key="<secret_key>",
    config=Config(signature_version="s3v4"),
)

# create a txt file
with open("test.txt", "w") as f:
    f.write("Hello from MinIO!")

# upload it
s3.upload_file("test.txt", "my-bucket", "upload-test.txt")
print("file uploaded\n")

# ls
response = s3.list_objects_v2(Bucket="my-bucket")
for obj in response.get("Contents", []):
    print(f"{obj['Key']}")

If you now go to the MinIO web interface, you should see the upload-test.txt file in the my-bucket bucket, and if you download it, it should contain Hello from MinIO!.
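
You can also verify from the command line instead of the web interface:

sh
docker run --rm -it -v ~/.mc:/root/.mc minio/mc ls myminio/my-bucket
docker run --rm -it -v ~/.mc:/root/.mc minio/mc cat myminio/my-bucket/upload-test.txt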