Privacy Policy


At Exp-Pi's Blog, accessible from https://blog.exp-pi.com, the privacy of our visitors is extremely important to us. This Privacy Policy outlines the types of personal information we receive and collect and how that information is used.


1. Log Files


Like many other websites, we make use of log files. The information inside the log files includes IP addresses, browser type, Internet Service Provider (ISP), date/time stamp, referring/exit pages, and number of clicks. This information is used to analyze trends, administer the site, track users' movements around the site, and gather demographic information.


2. Cookies


We may use cookies to store information about visitors' preferences and to customize web page content based on visitors' browser type or other information sent via their browser.


3. Google AdSense


Some of the ads may be served by Google. Google’s use of the DART cookie enables it to serve ads to users based on their visits to our site and other sites on the Internet. Users may opt out of the use of the DART cookie by visiting the Google ad and content network privacy policy at: https://policies.google.com/technologies/ads


4. Third Party Privacy Policies


You should consult the respective privacy policies of third-party ad servers, such as Google, for more detailed information on their practices and for instructions on how to opt out of certain practices.


5. Consent


By using our website, you hereby consent to our Privacy Policy and agree to its terms.
