Methodology

Learn about the methodology used in the HWPure Project.

1. Objectives and Commitments

HWPure.com aims to provide neutral, transparent, and verifiable benchmark results. We are committed to presenting data as objectively as possible so that visitors can make decisions based on fair and tested figures.

 

2. Types of Devices Tested

We test and group devices into the following main categories:

  • Processors (CPU): x86 and ARM, from desktop to mobile.
  • Video cards (GPU): Integrated and discrete.
  • Storage: SSDs, HDDs, flash drives, microSD cards, and UFS.
  • Memory (RAM): Single-channel and dual-channel, across various generations and frequencies.
  • Complete devices: Laptops, Android TV boxes, smartphones, SBCs (single-board computers), and others.

 

3. Data Sources and Collection Process

3.1 Internal Benchmark

  • All current benchmark data comes from direct testing conducted by the HWPure team on our own physical devices.
  • Each test is conducted under standard and controlled conditions:
    • System in idle state (no heavy applications running in the background)
    • Normal room temperature (no overclocking or extreme cooling)
    • Testing repeated if there are anomalies or noticeable differences
  • No data is taken from other websites, vendors, or third-party benchmark results.

 

4. Benchmark Applications Used

We selected benchmark applications based on the following criteria:

  • Well-known and trusted in the community
  • Support for multiple platforms
  • Stable and not biased toward specific vendors

Some of the applications we use include:

  • Geekbench
  • Cinebench
  • 3DMark
  • PCMark
  • GPUPI
  • HWBOT x265
  • PassMark
  • PiFast
  • wPrime
  • y-cruncher
  • GFXBench
  • Unigine
  • CrystalDiskMark
  • ATTO Disk Benchmark
  • AS-SSD
  • Cross Platform Disk Test

 

5. Testing Process

To maintain data integrity and provide consistent results, testing is conducted in a standardized manner in accordance with the following protocol:

5.1 Initial Conditions

  • The device is in idle mode; background applications are closed as much as possible.
  • The system is left stable for several minutes before testing begins.
  • The device may be restarted beforehand to ensure a clean environment.
  • System monitoring is performed before and after testing (CPU load, RAM, and temperature where available); a minimal monitoring sketch follows this list.
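
To illustrate the monitoring step, a short script along the following lines can capture the system state before and after a run. This is only a minimal sketch, assuming a desktop-class test machine with Python and the third-party psutil package installed; it is not the exact tooling used by the HWPure team, and the function name is a placeholder.

import time
import psutil  # third-party package: pip install psutil

def system_snapshot():
    """Capture CPU load, RAM usage, and temperature (where the OS exposes it)."""
    snapshot = {
        "cpu_percent": psutil.cpu_percent(interval=1),   # averaged over 1 second
        "ram_percent": psutil.virtual_memory().percent,
    }
    # Temperature sensors are only exposed on some platforms (mainly Linux).
    if hasattr(psutil, "sensors_temperatures"):
        temps = psutil.sensors_temperatures()
        if temps:
            snapshot["temp_c"] = next(iter(temps.values()))[0].current
    return snapshot

before = system_snapshot()   # state before the benchmark run
# ... run the benchmark here ...
after = system_snapshot()    # state after the run
print("before:", before)
print("after: ", after)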

5.2 Testing Cycle

  • Each test is run 2 to 5 times, depending on the type of hardware.
  • The first run serves as the initial reference.
  • If the initial result differs significantly (>20–50%) from the reference source:
    • The test is repeated up to 5 times.
    • If the results continue to fluctuate, the median value is used as the final result.
    • If the results remain extreme but consistent, the highest reasonable value may be selected, with a note.
  • If the initial result is close to the reference, the median of the first three runs is used (the selection logic is sketched below).
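
The repetition and selection rules above can be summarized as a small decision routine. The sketch below is illustrative only: it assumes the run scores have already been collected as a list of numbers, and the 20% threshold and function name are placeholders rather than part of HWPure's actual tooling.

from statistics import median

def select_final_score(runs, reference=None, divergence_threshold=0.20):
    """Pick the published score from repeated benchmark runs (illustrative sketch).

    runs      -- raw scores in the order they were produced (2 to 5 entries)
    reference -- optional external reference value used as a sanity check
    """
    first = runs[0]

    # No external reference available: use the median of the first three runs.
    if reference is None:
        return median(runs[:3])

    deviation = abs(first - reference) / reference
    if deviation <= divergence_threshold:
        # Initial result is close to the reference: median of the first three runs.
        return median(runs[:3])

    # Initial result diverges: run up to five times and take the median,
    # which damps fluctuating results while keeping consistent extremes visible.
    return median(runs)

# Example with made-up numbers:
print(select_final_score([1520, 1498, 1510], reference=1500))             # close to reference
print(select_final_score([900, 1450, 1430, 1440, 1435], reference=1500))  # divergent first run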

5.3 Handling Based on Hardware Type

  • CPU, GPU, and RAM:
    • Allow 5–20 minutes between tests to avoid thermal throttling.
  • Storage media (SSD, HDD, microSD, UFS):
    • Test at least twice, with a 5–10 minute interval between tests.
    • If the two results are very similar, the first test can be used as the final result.

5.4 Technical Documentation

  • Operating system version, benchmark application, firmware, and configuration are included when available.
  • Test results are saved as screenshots.
  • Temperature and throttling information is recorded when possible.

5.5 Validation of Results and Handling of Outliers

  • Variation between runs:
    • If the variation between test results exceeds ±10%, the results are marked as unstable.
    • Extreme values are treated as outliers and excluded from the average.
  • Cross-reference:
    • Where available, results are compared against public benchmark databases (PassMark, Geekbench, etc.).
    • Differences greater than 20% are labeled “divergent” and accompanied by a note (the labeling rules are sketched after this list).
  • Application version consistency:
    • Results from different benchmark versions are not mixed.
    • If mixing is unavoidable, the results are kept separate and marked with the version number.
    • All testing uses the latest version of each benchmark application available at the time, to ensure the accuracy and relevance of the results.
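
To make the labeling rules above concrete, the check below sketches one way a result set could be flagged. It is a minimal illustration, assuming the run scores and an optional public reference value are already known; the function name, the reading of ±10% as a spread around the mean, and the sample numbers are assumptions, not HWPure's exact tooling.

def label_result(runs, public_reference=None):
    """Attach 'unstable' / 'divergent' labels to a set of benchmark runs (sketch)."""
    labels = []

    # Variation between runs: one reading of the +/-10% rule is that the spread
    # between the best and worst run, relative to the mean, must stay within 10%.
    mean = sum(runs) / len(runs)
    spread = (max(runs) - min(runs)) / mean
    if spread > 0.10:
        labels.append("unstable")

    # Cross-reference: more than 20% away from a public database value -> divergent.
    if public_reference is not None:
        representative = sorted(runs)[len(runs) // 2]  # median as the representative score
        if abs(representative - public_reference) / public_reference > 0.20:
            labels.append("divergent")

    return labels

print(label_result([1000, 1180, 990], public_reference=1400))  # ['unstable', 'divergent']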

5.6 Special Treatment for Certain Devices

  • For devices such as smartphones, tablets, low-power laptops, or SBCs:
    • The device is left idle for 10–15 minutes before testing begins.
    • Where possible, testing is conducted without a power adapter to reflect everyday usage conditions.
    • For laptops, testing is conducted with the power adapter connected and the “High Performance” power plan enabled to achieve maximum performance.
    • For smartphones, no special performance-mode adjustments are made; testing reflects the standard configuration of the device.
    • If severe throttling occurs during testing, the results are still recorded and labeled “throttled” (see the sketch after this list).
    • Additional notes regarding thermal performance stability are included in the final report.
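
As a rough illustration of how a “throttled” label could be attached, the sketch below samples the CPU clock during a run and compares the lowest observed frequency to the starting frequency. It assumes a desktop-class OS where the psutil package exposes cpu_freq(); the 25% drop threshold and function names are arbitrary placeholders, not the exact criterion used in our reports.

import time
import psutil  # third-party package: pip install psutil

def sample_frequencies(duration_s=60, interval_s=2):
    """Sample the current CPU frequency (MHz) for the given duration."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        freq = psutil.cpu_freq()
        if freq is not None:
            samples.append(freq.current)
        time.sleep(interval_s)
    return samples

def throttle_label(samples, drop_threshold=0.25):
    """Return 'throttled' if the clock dips well below its starting value."""
    if len(samples) < 2:
        return ""
    start, lowest = samples[0], min(samples)
    return "throttled" if (start - lowest) / start > drop_threshold else ""

# Example: sample for one minute while a benchmark runs in parallel.
frequencies = sample_frequencies(duration_s=60)
print(throttle_label(frequencies))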

 

6. Handling Anomalies & Outliers

  • Validation is carried out entirely manually, based on observation of the results and comparison between runs.
  • If the test results show significant differences (>10% between runs), they are considered unstable and retested.
  • Results that are too extreme (technically unreasonable or far from the reference value) are marked as outliers and are not used as the main reference.

 

7. Final Score Calculation

  • The final score is the raw value generated by the benchmark application.
  • No cross-device or cross-category normalization is performed.
  • Each score is displayed as it is generated by the application, without any additional modifications.
  • This maintains transparency and makes it easier for readers to compare the results with official benchmark sources.

 

8. Transparency & Ethics

We stand by the following principles:

  • No selling of rankings
  • No vendor bias
  • No changing of scores at the request of outside parties

If your device scores low, that's the result we publish. We choose transparency over sponsor pressure.

 

9. Limitations to Be Aware Of

Benchmarks are not a substitute for real-world experience. Some limitations:

  • They do not reflect long-term performance.
  • Devices may throttle if temperatures climb too high.
  • Scores may vary with firmware or OS updates.

We always advise users to consider benchmarks as one factor in choosing a device.

 

10. Methodology Version

  • Current Version: 1.0
  • Effective Date: June 29, 2025
  • This methodology will be updated over time in response to:
    • New releases of benchmark applications
    • Community input
    • Technological changes

 

11. FAQs Related to Methodology

Q: Does HWPure accept sponsorship from hardware manufacturers?

A: No. All testing and content on HWPure is conducted independently. We do not accept sponsorship, affiliation, or any form of commercial support that could influence test results, data interpretation, or information presentation.

 

Q: Are tests conducted using devices or software from specific vendors?

A: We use the publicly available versions of benchmark applications, without any modifications or special settings from any vendor.

 

12. Contact and Collaboration

Currently, all testing and data come from the HWPure internal team, without any contributions from outside parties. However, we remain open to:

  • Input and corrections related to methodology
  • Research collaboration or verification of benchmark results
  • Requests for audits or replication of tests, if equipment and data are available

Contact us via:

Email: [email protected]

Contact form: available on our official website (hwpure.com/contact)

 
