Posts

What is bitsandbytes and its uses

What is PEFT (Parameter-Efficient Fine-Tuning)

What is the Transformers library

What is LoRA (Low-Rank Adaptation)

ValueError: Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set llm_int8_enable_fp32_cpu_offload=True and pass a custom device_map to from_pretrained. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details.

LoRA vs QLoRA

OSError: You are trying to access a gated repo. Make sure to have access to it at https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2.

What is Ollama

Time Intelligence Functions in Power BI: A Comprehensive Guide

Creating Calculated Columns and Measures in Power BI

Star Schema vs. Snowflake Schema in Power BI: Key Differences and Best Practices