# Feature Scaling

## Techniques

**Standard Scaler** standardizes data by subtracting the mean and dividing by the standard deviation. This is generally preferred for machine learning models as it:

* Makes all features have zero mean and unit variance, which can improve model performance.
* Puts all features on a comparable scale, so no feature dominates simply because of its original units or magnitude.
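
The transformation is `z = (x - mean) / std`. A minimal pure-Python sketch of one feature column (scikit-learn's `StandardScaler` applies the same formula per column):

```python
from statistics import mean, pstdev

def standard_scale(values):
    """Standardize one feature: subtract the mean, divide by the std dev."""
    mu = mean(values)
    sigma = pstdev(values)  # population std dev, as StandardScaler uses
    return [(x - mu) / sigma for x in values]

scaled = standard_scale([10.0, 20.0, 30.0, 40.0, 50.0])
# The result has zero mean and unit variance.
```

In practice you would fit the mean and standard deviation on the training set only and reuse them to transform validation and test data, to avoid leakage.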

**Min-Max Scaler** scales the data to a specific range, typically between 0 and 1. This may be useful in certain cases, but it can be problematic for machine learning models because:

* It does not standardize the variance: after scaling, features can still have very different spreads within the target range, which matters for variance-sensitive models.
* It is sensitive to outliers: a single extreme value sets the min or max and compresses the rest of the data into a narrow sub-range.
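
The transformation is `x' = (x - min) / (max - min)`, optionally rescaled to another range. A minimal sketch that also shows the outlier problem described above:

```python
def min_max_scale(values, lo=0.0, hi=1.0):
    """Linearly rescale values so min -> lo and max -> hi."""
    mn, mx = min(values), max(values)
    return [lo + (x - mn) * (hi - lo) / (mx - mn) for x in values]

# A single outlier (1000) sets the max and squeezes the
# remaining points into a tiny slice near 0:
out = min_max_scale([10.0, 20.0, 30.0, 1000.0])
```

Here the three inlier points end up within roughly 2% of each other at the bottom of the `[0, 1]` range, while the outlier alone occupies the top.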

