PHPFixing
Showing posts with label testing.

Monday, November 14, 2022

[FIXED] How to handle errors for inline functions inside an object

 November 14, 2022     error-handling, inline, javascript, object, testing     No comments   

Issue

At the moment I've got this:

const checkText = (t) => ({
    isNotEmpty: function () {
      if (t.length === 0) {
        throw new Error("isNotEmpty false");
      }
      return this;
    },
    isEmail: function () {
      const emailRegex = /^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{1,20})+$/;
      if (!emailRegex.test(t)) {
        throw new Error("isEmail false");
      }
      return this;
    }
});

Using a try catch, if the text is not valid, an error is thrown and handled this way:

const isFieldValid = (email) => {
  try {
    const v = checkText(email)
      .isNotEmpty()
      .isEmail();
    if (v) {
      return true;
    }
  } catch (error) {
    return false;
  }
};

The goal is to shorten and clean up the code, avoiding the try/catch, so that the final call fits on one line like this:

const isFieldValid = checkText('valid text').isNotEmpty().isEmail(); // one line

PS: I know there are libraries out there for validation. The code is just an example.


Solution

class CheckText {
  constructor(t) {
    this.t = t;
    this.valid = true;
  }
  isNotEmpty() {
    this.valid &&= this.t.length>0;
    return this;
  }
  isEmail() {
    this.valid &&= CheckText.emailRegex.test(this.t);
    return this;
  }
  static emailRegex = /^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{1,20})+$/;
}

const isFieldValid = new CheckText('test@example.com').isNotEmpty().isEmail().valid;
console.log(isFieldValid);

However, I'd prefer to do it like this:

const emailRegex = /^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{1,20})+$/;
const isNotEmpty = t => t.length>0;
const isEmail = t => emailRegex.test(t);
const validate = (t, validators) => validators.every(v=>v(t));
console.log(validate('test@example.com', [isNotEmpty, isEmail]));
console.log(validate('testexample.com',  [isNotEmpty, isEmail]));
console.log(validate('',                 [isNotEmpty, isEmail]));
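If you also want to know which rule failed, a small variant of the same idea (my own sketch building on the answer's isNotEmpty/isEmail helpers, not part of the answer itself) can collect the names of the failing validators:

const validateWithErrors = (t, validators) =>
  Object.entries(validators)
    .filter(([, check]) => !check(t))   // keep only the rules that fail
    .map(([name]) => name);             // return their names

console.log(validateWithErrors('testexample.com', {isNotEmpty, isEmail})); // ['isEmail']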



Answered By - Andrew Parks
Answer Checked By - Senaida (PHPFixing Volunteer)

Sunday, November 6, 2022

[FIXED] How can I define multiple instances in molecule which differ only in name?

 November 06, 2022     ansible, docker, molecule, testing     No comments   

Issue

I've got a molecule.yml which looks a bit like this:

dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: testohpc-compute-0
    image: docker.io/pycontribs/centos:7
    pre_build_image: true
    groups:
      - testohpc_compute
    command: /sbin/init
    tmpfs:
      - /run
      - /tmp
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    networks:
      - name: net1

How can I define another instance, say testohpc-compute-2, which is exactly the same except for the name? Do I really need to copy the whole definition again?

Furthermore, if there's a way of reusing an instance definition, can I share it between scenarios?


Solution

You can take advantage of the YAML anchor and merge key features. You can find a basic explanation on Learn X in Y minutes (YAML).

In your specific case, here is a possible solution.

platforms:
  - &default_platform
    name: testohpc-compute-0
    image: docker.io/pycontribs/centos:7
    pre_build_image: true
    groups:
      - testohpc_compute
    command: /sbin/init
    tmpfs:
      - /run
      - /tmp
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    networks:
      - name: net1
  - <<: *default_platform
    name: testohpc-compute-2

Note: anchors and merge keys can only be used within the same YAML file, so this will not work across different scenarios.
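Keys you set next to the merge override the anchored values, so the same trick also covers instances that differ in more than the name. A sketch (the extra instance name and image below are hypothetical):

  - <<: *default_platform
    name: testohpc-login-0
    image: docker.io/pycontribs/debian:latest   # hypothetical override of the anchored image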



Answered By - Zeitounator
Answer Checked By - Clifford M. (PHPFixing Volunteer)

Saturday, November 5, 2022

[FIXED] How to test code dependent on environment variables using JUnit?

 November 05, 2022     environment-variables, java, junit, testing, unit-testing     No comments   

Issue

I have a piece of Java code which uses an environment variable and the behaviour of the code depends on the value of this variable. I would like to test this code with different values of the environment variable. How can I do this in JUnit?

I've seen some ways to set environment variables in Java in general, but I'm more interested in the unit-testing aspect of it, especially considering that tests shouldn't interfere with each other.


Solution

The library System Lambda has a method withEnvironmentVariable for setting environment variables.

import static com.github.stefanbirkner.systemlambda.SystemLambda.*;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class EnvironmentVariablesTest {
  @Test
  public void setEnvironmentVariable() throws Exception {
    String value = withEnvironmentVariable("name", "value")
      .execute(() -> System.getenv("name"));
    assertEquals("value", value);
  }
}

For Java 5 to 7 the library System Rules has a JUnit rule called EnvironmentVariables.

import static org.junit.Assert.assertEquals;

import org.junit.Rule;
import org.junit.Test;
import org.junit.contrib.java.lang.system.EnvironmentVariables;

public class EnvironmentVariablesTest {
  @Rule
  public final EnvironmentVariables environmentVariables
    = new EnvironmentVariables();

  @Test
  public void setEnvironmentVariable() {
    environmentVariables.set("name", "value");
    assertEquals("value", System.getenv("name"));
  }
}

Full disclosure: I'm the author of both libraries.
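If adding a library is not an option, a common alternative (my own sketch, not part of the answer above) is to stop calling System.getenv directly in the code under test and inject the lookup instead, so a test can pass a plain Map:

import java.util.Map;
import java.util.function.Function;

// Hypothetical class under test: it reads configuration through an injected
// lookup instead of System.getenv, so tests never touch real env variables.
class Greeter {
  private final Function<String, String> env;

  Greeter(Function<String, String> env) {
    this.env = env;
  }

  String greeting() {
    String name = env.apply("NAME");
    return name == null ? "Hello, world" : "Hello, " + name;
  }
}

// Production code: new Greeter(System::getenv)
// In a test:       new Greeter(Map.of("NAME", "Alice")::get)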



Answered By - Stefan Birkner
Answer Checked By - Katrina (PHPFixing Volunteer)

Friday, October 28, 2022

[FIXED] How to check if an ADOdb recordset is empty

 October 28, 2022     adodb-php, is-empty, recordset, testing     No comments   

Issue

I ran into a problem while checking whether an ADOdb 5 recordset is empty:

$db = ADONewConnection($dbdriver);
$db->Connect($dbhost, $dbuser, $dbpass, $dbname);
$rs = $db->Execute($query);

Now, if I try to check the recordset like this:

if(isset($rs[0]))
...
...

I get the error Cannot use object of type ADORecordSet_mysqli as array

How do you check whether the returned recordset is or isn't empty?


Solution

Before the isset() check, I converted the ADOdb recordset to an array:

$ra = $rs->getRows();

Then I tested whether the $ra array is empty:

if(empty($ra)){
...
...
}
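Putting the two steps together in context (and, as a hedged aside, ADOdb recordsets also expose a RecordCount() method that can avoid building the full array; treat that as an assumption to verify against your ADOdb version):

$rs = $db->Execute($query);

$ra = $rs->getRows();      // convert the recordset to a plain array
if (empty($ra)) {
    // no rows were returned
}

// Possible alternative (assumption about the ADOdb API, verify first):
// if ($rs->RecordCount() === 0) { ... }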


Answered By - m1k3y3
Answer Checked By - Mary Flores (PHPFixing Volunteer)

Sunday, October 16, 2022

[FIXED] How to use a .env variable in package.json

 October 16, 2022     command, cypress, package.json, testing, vue.js     No comments   

Issue

I'm using Cypress to run some tests with my VueJS project.

I just want to run the tests with the browser of my choice, so I made a .env like below:

BROWSER=edge

And in package.json file, I write a command like this:

"scripts":{
      "cy:run" : "cypress run --browser %BROWSER%"
}

I know I can put the command like this

"cy:run" : "cypress run --browser edge"

But the reason I created a .env file is that when the tests finish, I want to save the results under the browser's name. So when I change BROWSER in my .env, all I have to do afterwards is run the npm command.

But it didn't work: Cypress couldn't detect which browser I wanted. I've tried many variations, including the one above.

Can anyone tell me how to make it work? Super many thanks.

I've tried hard-coding a specific browser: when the test is done, the results are saved with the name I want, which means the BROWSER value in the .env file itself is fine to use.


Solution

I figured this out by using cross-env.

First I installed cross-env with npm i cross-env.

Then I modified my package.json like this:

"scripts":{
   "run:env" : "cross-env BROWSER=\"edge\" npm run cy:run"
   "cy:run" : "cross-env-shell cypress run --browser=$BROWSER"
 }

Then I run npm run run:env

Everything works now.

process.env.BROWSER is still usable even after I deleted the .env file.
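A slightly simpler variant of the same idea (my own sketch, not from the answer) is to define one script per browser, so no shell variable expansion is needed at all:

"scripts": {
   "cy:edge"   : "cross-env BROWSER=edge cypress run --browser edge",
   "cy:chrome" : "cross-env BROWSER=chrome cypress run --browser chrome"
}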



Answered By - Nguyễn Nhật Trường
Answer Checked By - Clifford M. (PHPFixing Volunteer)

Sunday, October 9, 2022

[FIXED] How do I create an automated test for my python script?

 October 09, 2022     continuous-integration, gitlab, python, testing, unit-testing     No comments   

Issue

I am fairly new to programming and currently working on a python script. It is supposed to gather all the files and directories that are given as paths inside the program and copy them to a new location that the user can choose as an input.

import shutil 
import os 
from pathlib import Path 
import argparse 
 
src = [  [insert name of destination directory, insert path of file/directory that  
          should be copied ]
      ]

x = input("Please choose a destination path\n>>>")
if not os.path.exists(x):
    os.makedirs(x)
    print("Directory was created")
else:
    print("Existing directory was chosen")

dest = Path(x.strip())

for pfad in src:
    
    if os.path.isdir(pfad[1]):          
        shutil.copytree(pfad[1], dest / pfad[0]) 
    
    elif os.path.isfile(pfad[1]): 
        pfad1 = Path(dest / pfad[0])
        if not os.path.exists(pfad1):
             os.makedirs(pfad1)
        
        shutil.copy(pfad[1], dest / pfad[0]) 

    else:
        print("An error occured")
        print(pfad) 

print("All files and directories have been copied!")
input()

The script itself works just fine. The problem is that I want to write a test that automatically checks the code each time I push it to my GitLab repository. I have been browsing the web for quite some time but wasn't able to find a good explanation of how to approach writing a test for a script like this. I would be extremely thankful for any kind of feedback or hints to helpful resources.


Solution

First, you should write a test that you can run from the command line. I suggest you use the argparse module to pass the source and destination directories, so that you can run the script as script.py source_dir dest_dir without human interaction.
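For instance, a minimal sketch of that refactor (the names backup.py and copy_sources are assumptions, not from the answer), plus a pytest test that exercises it with a temporary directory:

# backup.py — a hypothetical refactor of the original script
import argparse
import shutil
from pathlib import Path

def copy_sources(sources, dest):
    """Copy each (target_name, source_path) pair into dest/target_name."""
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    for target_name, source in sources:
        source = Path(source)
        target = dest / target_name
        if source.is_dir():
            shutil.copytree(source, target)
        elif source.is_file():
            target.mkdir(parents=True, exist_ok=True)
            shutil.copy(source, target)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("source")
    parser.add_argument("dest")
    args = parser.parse_args()
    copy_sources([(Path(args.source).name, args.source)], args.dest)

if __name__ == "__main__":
    main()

# test_backup.py — a pytest test using a temporary directory
from backup import copy_sources

def test_copies_a_file(tmp_path):
    src = tmp_path / "data.txt"
    src.write_text("hello")

    copy_sources([("data", src)], tmp_path / "out")

    assert (tmp_path / "out" / "data" / "data.txt").read_text() == "hello"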

Then, as you have a test you can run, you need to add a .gitlab-ci.yml to the root of the project so that you can use the GitLab CI. If you have never used the GitLab CI, you should start here: https://docs.gitlab.com/ee/ci/quick_start/

After that, you'll be able to add a job to your .gitlab-ci.yml so that a runner with Python installed will run the test. If you don't understand the bold terms in the previous sentence, you need to understand GitLab CI first.
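A minimal .gitlab-ci.yml for this could look roughly like the following sketch (the image tag and commands are assumptions; adjust them to your project):

# .gitlab-ci.yml (sketch)
test:
  image: python:3.10
  script:
    - pip install pytest
    - pytest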



Answered By - ofaurax
Answer Checked By - Willingham (PHPFixing Volunteer)

Sunday, September 4, 2022

[FIXED] How to logout user in Laravel functional tests?

 September 04, 2022     authentication, laravel, mocking, testing     No comments   

Issue

In Laravel feature tests,

Given that a user has been programmatically logged in using

$this->actingAs(self::$user, 'api');

How would I log out this user?
actingAs() does not accept null as its first parameter.


Solution

A good way to log out the user is to mock the guard:

// At the top of the test file:
use Illuminate\Contracts\Auth\Guard;
use Illuminate\Support\Facades\Auth;
use Mockery;

// In the test:
$guard = Mockery::mock(Guard::class);
$guard->expects('check')
        ->andReturns(false);

Auth::shouldReceive('guard')
       ->andReturns($guard);
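If several tests need to do this, one option (a sketch of my own, not part of the answer) is to wrap it in a helper on the base test case:

// tests/TestCase.php (hypothetical location)
protected function actAsGuest(): void
{
    $guard = Mockery::mock(Guard::class);
    $guard->expects('check')->andReturns(false);

    Auth::shouldReceive('guard')->andReturns($guard);
}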


Answered By - Florent Brassart
Answer Checked By - Senaida (PHPFixing Volunteer)

Wednesday, August 24, 2022

[FIXED] How do you test your Perl module to check if Makefile.PL declares all dependencies?

 August 24, 2022     cpan, module, perl, release, testing     No comments   

Issue

I would like to write a t/00-check-deps.t test that finds all dependencies in MyModule.pm and makes sure they exist in Makefile.PL before release.

This way when I do make test before distributing to CPAN I will know that nothing was forgotten prior to publishing. I've looked at the ExtUtils suite, but I've not seen anything obvious that already solves this. It seems like a common issue people would want to solve.

How would you do this?


Solution

Here is how I would do it. Thanks @ikegami for the scandeps hint:

find lib -name '*.pm' | xargs scandeps.pl -R | \
  perl -MJSON -le '
    undef $/; 
    %d=eval(<STDIN>);
    $j=JSON::from_json(`cat MYMETA.json`); 
    foreach (keys(%d)) {
      warn "Missing: $_ => $d{$_}\n" if !defined($j->{prereqs}{runtime}{requires}{$_}) 
    }
    '

prints:

Missing: Carp => 1.42
Missing: PDL::Constants => 0.02
Missing: Exporter => 5.72
Missing: constant => 1.33
Missing: PDL => 2.080
Missing: PDL::LinearAlgebra => 0.35
Missing: PDL::Ops => undef
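For a ready-made t/ test, CPAN's Test::Prereq module targets exactly this check; if I remember its interface correctly (treat this as an assumption, not part of the answer above), the whole test file boils down to:

# t/00-check-deps.t (sketch; assumes Test::Prereq is installed)
use Test::Prereq;
prereq_ok();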


Answered By - KJ7LNW
Answer Checked By - Willingham (PHPFixing Volunteer)

Sunday, August 21, 2022

[FIXED] How to set Node Environment as Env Variable in the middle of the app?

 August 21, 2022     environment-variables, javascript, node.js, testing     No comments   

Issue

I know that I can do process.env.NODE_ENV = TEST but it is not working for me. Relevant code below:

test.js

import server from "../../src/index.js";

process.env.NODE_ENV = "test";
console.log(process.env.NODE_ENV);  // returns "test"
chai.use(chaiHttp);

// calls server here with chai-http

src/index.js

import express from "express";
import dotenv from "dotenv";

dotenv.config();

const app = express();

// Some API endpoint here that calls getUserFromUserId

app.listen(port, () => {
  logger.info(`App running on http://localhost:${port}`);
});

export default app;

user.js

console.log(process.env.NODE_ENV)  // returns undefined
process.env.NODE_ENV = "test"  // manually sets it here again
console.log(process.env.NODE_ENV)  // returns test correcly this time

So the issue here is that when I run test.js, I am importing, and therefore running user.js before I set my NODE_ENV. Since imports are hoisted I can't bring the env setting earlier either. However, I need the user.js to behave differently when I am testing, and hence I need to set the NODE_ENV before running user.js code. How can I achieve that?

Edit: I tried changing my test script to 'test: SET NODE_ENV=test && mocha'. This seems to set the Node env, but I am still facing issues.

user.js

console.log(process.env.NODE_ENV);  // returns test
console.log(process.env.NODE_ENV === "test");  // returns false
process.env.NODE_ENV = "test";
console.log(process.env.NODE_ENV);  // returns test
console.log(process.env.NODE_ENV === "test");  // returns true

Somehow the two 'test' values are different? There is also the issue of SET being Windows-specific.


Solution

For now I have settled on installing cross-env and doing

"test" : "cross-env NODE_ENV=test mocha"

but would love to hear better suggestions.
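As an aside (my note, not part of the answer): the mismatch in the edit above is almost certainly a trailing space. With SET NODE_ENV=test && mocha, cmd.exe includes the space before && in the value, so the variable ends up as "test " rather than "test". A quick way to confirm:

// Log the value with delimiters so trailing whitespace becomes visible
console.log(JSON.stringify(process.env.NODE_ENV));   // "test " with the SET ... && form
console.log(process.env.NODE_ENV.trim() === "test"); // true once trimmed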



Answered By - Samson
Answer Checked By - Clifford M. (PHPFixing Volunteer)

Wednesday, August 17, 2022

[FIXED] How do I show the console in a c++/CLI application?

 August 17, 2022     .net, c++-cli, output, testing     No comments   

Issue

I am working on a C++/CLI application assignment in VS 2012. For testing purposes, I am trying to print some output to the console (to test methods as I build them), but there is no console window for this Windows Forms application. Is there a way I can get the console window to show?

Or does anyone have a suggestion as to how I can display method output/results?

Thanks.

Edit - I figured out how to get the Console Window to work. Thanks David for the response.


Solution

As @David points out, Debug::WriteLine is an excellent way to trace or send state to the output window.

System::Diagnostics::Debug::WriteLine(L" -- Object State or Tracing");

However, if you are still wanting a console window for your windows application, consider the following:

// Beginning of Application
#if _DEBUG
    if (::AllocConsole())   // <-- http://msdn.microsoft.com/en-us/library/windows/desktop/ms681952(v=vs.85).aspx
        if (!::AttachConsole(ATTACH_PARENT_PROCESS))  // -1 == ATTACH_PARENT_PROCESS or Process ID
            System::Windows::MessageBox::Show(L"Unable to attach console window", L"Error", System::Windows::MessageBoxButton::OK, System::Windows::MessageBoxImage::Exclamation);
#endif

// Application End
#if _DEBUG
    ::FreeConsole();       // <-- http://msdn.microsoft.com/en-us/library/windows/desktop/ms683150(v=vs.85).aspx
#endif

Note that this will only be seen when built using the debug configuration.
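One extra detail that often comes up (my addition, not covered in the answer): allocating a console does not automatically reconnect the CRT streams, so printf/std::cout output may still not appear until stdout is rebound to the new console, for example:

// After a successful AllocConsole(), rebind the CRT standard streams to it.
// (Requires <cstdio> / <stdio.h>.)
FILE* stream;
freopen_s(&stream, "CONOUT$", "w", stdout);
freopen_s(&stream, "CONOUT$", "w", stderr);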

Hope this helps.



Answered By - Jeff
Answer Checked By - Mildred Charles (PHPFixing Admin)

Friday, August 5, 2022

[FIXED] How to fix MojoFailureException while using spring to build web project

 August 05, 2022     eclipse, exception, maven, spring, testing     No comments   

Issue

Recently I used Spring STS with Roo 1.2.0.M1 to build a web project. I set up JPA, created an entity with some fields, and created a repository and a service layer for the entity. Then, when I run perform tests, I get the following error:

roo> perform tests 
[INFO] Scanning for projects...
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building WebApplication 0.1.0.BUILD-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- aspectj-maven-plugin:1.2:compile (default) @ WebApplication ---
[INFO] 
[INFO] --- maven-resources-plugin:2.5:resources (default-resources) @ WebApplication ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 5 resources
[INFO] 
[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ WebApplication ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- aspectj-maven-plugin:1.2:test-compile (default) @ WebApplication ---
[INFO] 
[INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ WebApplication ---
[debug] execute contextualize
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 0 resource
[INFO] 
[INFO] --- maven-compiler-plugin:2.3.2:testCompile (default-testCompile) @ WebApplication ---
[INFO] Nothing to compile - all classes are up to date
[INFO] 
[INFO] --- maven-surefire-plugin:2.8:test (default-test) @ WebApplication ---
[INFO] Surefire report directory: /Users/charlesli/Documents/workspace-spring/WebApplication/target/surefire-reports
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 15.936s
[INFO] Finished at: Fri Oct 28 20:59:59 EST 2011
[INFO] Final Memory: 6M/81M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.8:test (default-test) on project WebApplication: There are test failures.
[ERROR] 
[ERROR] Please refer to /Users/charlesli/Documents/workspace-spring/WebApplication/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

When I run mvn test in the terminal, I get the following errors:

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13.614s
[INFO] Finished at: Fri Oct 28 21:06:50 EST 2011
[INFO] Final Memory: 6M/81M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.8:test (default-test) on project WebApplication: There are test failures.
[ERROR] 
[ERROR] Please refer to /Users/charlesli/Documents/workspace-spring/WebApplication/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.8:test (default-test) on project WebApplication: There are test failures.

Please refer to /Users/charlesli/Documents/workspace-spring/WebApplication/target/surefire-reports for the individual test results.
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:319)
    at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
    at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
    at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
    at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoFailureException: There are test failures.

Please refer to /Users/charlesli/Documents/workspace-spring/WebApplication/target/surefire-reports for the individual test results.
    at org.apache.maven.plugin.surefire.SurefireHelper.reportExecution(SurefireHelper.java:74)
    at org.apache.maven.plugin.surefire.SurefirePlugin.writeSummary(SurefirePlugin.java:644)
    at org.apache.maven.plugin.surefire.SurefirePlugin.executeAfterPreconditionsChecked(SurefirePlugin.java:640)
    at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:103)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
    ... 19 more
[ERROR] 
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

I use the following commands to build the project:

jpa setup --database MYSQL --provider HIBERNATE --databaseName App --hostName localhost --password root --persistenceUnit app --transactionManager appTransactionManager --userName root
entity --class ~.app.domain.DomainObjBaseModel --mappedSuperclass --persistenceUnit app --transactionManager appTransactionManager

// After running the above command, I manually added the following to DomainObjBaseModel, because I don't know how to customise the Roo auto-generated code:
    @Id @GeneratedValue(generator="system-uuid")
    @GenericGenerator(name="system-uuid", strategy = "uuid")
    @Column(unique = true, name = "id", nullable = false, length=32)
    private String id;
// After this, I continued with the following commands.

entity --class ~.app.domain.Application --extends com.crazysoft.web.app.domain.DomainObjBaseModel --persistenceUnit app --transactionManager appTransactionManager --serializable --testAutomatically
repository jpa --interface ~.app.repository.ApplicationRepository --entity ~.app.domain.Application
service --interface ~.app.service.ApplicationService --entity ~.app.domain.Application

This is the configuration of the Maven compiler plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.3.2</version>
    <configuration>
        <source>1.6</source>
        <target>1.6</target>
        <encoding>UTF-8</encoding>
    </configuration>
</plugin>

After finishing the above, I run perform tests through the STS Roo shell and get the error above.

Does anyone know why this exception occurs? Am I doing something wrong, and how can I fix it?

Please help me!

Thank you in advance!


Solution

One or more tests are not working.

Have a look at the files located at: /Users/charlesli/Documents/workspace-spring/WebApplication/target/surefire-reports (usually the bigger files contain a problem)

There you will find the test results and the test that is broken. The stack trace contained in that file will guide you to the problem.

(BTW: you can also run the tests in Eclipse via the JUnit plugin (Package Explorer, right-click, Run As > JUnit Test); then you will see the stack trace in the IDE and do not need to search through the files.)
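Once you know which test is failing, you can also re-run just that class from the command line with Surefire and print the error details directly in the console (the class name below is hypothetical):

mvn test -Dtest=ApplicationIntegrationTest -e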


My guess is that the DB connection is not configured correctly, but this is only a guess.



Answered By - Ralph
Answer Checked By - Gilberto Lyons (PHPFixing Admin)

Tuesday, July 26, 2022

[FIXED] How to change the popup window's score to a percentage?

 July 26, 2022     dreamweaver, javascript, onsubmit, popupwindow, testing     No comments   

Issue

I am trying to create a basic IQ Test with JavaScript, containing only 6 questions.

If you get all the questions right, then after you click submit the automatic pop-up window gives you a score of 6. However, I want it to show a percentage based on the number of correct answers.

Here's what I've tried (you'll see that my problem probably is in the variables):

function calculate()
{
    var x, y, score;
    y;

    x = document.personalinfo.firstname.value;

    score = (y*100)/6;
    window.alert("Hey " + x + ", your score is: " + score);

    if(document.IQTest.Q1[0,1,3,4].checked == true)
        score++;

    if(document.IQTest.Q2[2].checked == true)
        score++;

    if(document.IQTest.Q3[1].checked == true)
        score++;

    if(document.IQTest.Q4[3].checked == true)
        score++;

    if(document.IQTest.Q5[2].checked == true)
        score++;

    if(document.IQTest.Q6[0,2,4].checked == true)
        score++;
}

Solution

It's just that the code's a bit out of order and you mixed up two variables.

function calculate() {
    var x, y, score;

    score = 0;
    x = document.personalinfo.firstname.value;

    if (document.IQTest.Q1[0,1,3,4].checked == true) score++;
    if (document.IQTest.Q2[2].checked == true) score++;
    if (document.IQTest.Q3[1].checked == true) score++;
    if (document.IQTest.Q4[3].checked == true) score++;
    if (document.IQTest.Q5[2].checked == true) score++;
    if (document.IQTest.Q6[0,2,4].checked == true) score++;

    y = (score*100)/6;

    window.alert("Hey " + x + ", your score is: " + y);
}
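One thing to be aware of (my note, not part of the answer): an index like Q1[0,1,3,4] uses JavaScript's comma operator, so it only ever reads Q1[4]. If the intent is that any of those options counts as correct, something like the following sketch is needed instead:

// True if any of the listed inputs in the group is checked
function anyChecked(group, indexes) {
    return indexes.some(function (i) { return group[i].checked; });
}

if (anyChecked(document.IQTest.Q1, [0, 1, 3, 4])) score++;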


Answered By - Blue Sheep
Answer Checked By - Pedro (PHPFixing Volunteer)

Sunday, July 17, 2022

[FIXED] How do I prevent a Matlab test with "assertWarning" from printing warning text to the console?

 July 17, 2022     matlab, suppress-warnings, tdd, testing, warnings     No comments   

Issue

I am trying to implement a basic testing framework in Matlab as my first attempt at Test-Driven-Development. One of the tests I am trying to create is intended to verify that my function throws a specific warning under certain input conditions. My code is passing the warning-related tests as intended, however there is a huge annoyance.

When running (and passing) tests involving the "assertWarning" or "verifyWarning" functions, the warnings that are supposed to be triggered are printed to the command window and visually disrupt the printout of my test suite. Is there a way to prevent the (desired) warning from printing to the console only when being run in the tests, while still verifying that the warning is triggered? A sample test function which causes this annoying warning printout is below.

function testAcceleratorMax(testCase)
% Validate that acceleration input is forced to be <=1 and throws warning
state = [0,0,0,0]; input = [2,0];
xd = getPointMass2D_dot(state,input);
assert(isequal(xd,[0,0,1,0]),'Acceleration not ceiled correctly');
verifyWarning(testCase,@(x) getPointMass2D_dot(state,input),...
    'MATLAB:CommandedAccelOutOfBounds');
end

Solution

While it may not be the most elegant solution, I have found a much less intrusive method!

Step 1: Turn off the specific warnings you are deliberately triggering in the test suite setup function. You could also do this and step 2 within each test function individually if needed. Even when a warning is turned off and won't print to the command window, you can still access the suppressed warning using "lastwarn".

function setup(testCase)
warning('off','MATLAB:CommandedAccelOutOfBounds');
warning('off','MATLAB:CommandedSteerOutOfBounds');
end

Step 2: Turn the specific warnings back on in the test suite teardown function to reset MATLAB to the correct state after running the test suite.

function teardown(testCase)
warning('on','MATLAB:CommandedAccelOutOfBounds');
warning('on','MATLAB:CommandedSteerOutOfBounds');
end

Step 3: Instead of using the "verifyWarning" or "assertWarning" functions for your test, use "lastwarn" and "strcmp".

function testAcceleratorMax(testCase)
% Validate that acceleration input is forced to be <=1 and throws warning
state = [0,0,0,0]; input = [2,0];
xd = getPointMass2D_dot(state,input);
assert(isequal(xd,[0,0,1,0]),'Acceleration not ceiled correctly');
[~,warnID] = lastwarn; % Gets last warning, even though it was "off"
assert(strcmp(warnID,'MATLAB:CommandedAccelOutOfBounds'), 'Correct warning not thrown')
end
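One extra precaution worth considering (my addition, not in the answer): lastwarn keeps whatever the previous test left behind, so resetting it before exercising the code under test avoids false positives:

% Clear the last-warning state before the call under test
lastwarn('');
xd = getPointMass2D_dot(state, input);
[~, warnID] = lastwarn;
assert(strcmp(warnID, 'MATLAB:CommandedAccelOutOfBounds'), 'Correct warning not thrown');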


Answered By - pheidlauf
Answer Checked By - David Goodson (PHPFixing Volunteer)

Saturday, June 25, 2022

[FIXED] How to test Headers in a ReverseProxy?

 June 25, 2022     go, reverse-proxy, testing, unit-testing     No comments   

Issue

I am trying to unit test the following code:

func(h *Handler)Forward(w http.ResponseWriter, r *http.Request) {

    url, err := url.Parse("http://test.com")
    if err != nil {
       return
    }

    reverseProxy := &httputil.ReverseProxy{
        Director: func(r *http.Request) {
            r.URL.Host = url.Host
            r.URL.Path = "/"
            r.URL.Scheme = url.Scheme
            r.Host = url.Host
            r.Header.Set("X-Forwarded-Host", r.Header.Get("Host"))
        },
    }


    reverseProxy.ServeHTTP(w, r)
}

I am not able to figure out how to test whether headers are being modified by the Director function. How do we test headers in a reverse proxy in Go?


Solution

1. Inject external dependencies into your unit under test

The biggest problem I can see right now is that the URL you forward to is hard-coded in your function. That makes it very hard to unit test. So the first step would be to extract the URL from the function. Without knowing the rest of your code, Handler seems like a nice place to do this. Simplified:

type Handler struct {
    backend *url.URL
}

func NewHandler() (*Handler, error) {
    backend, err := url.Parse("http://test.com")
    if err != nil {
        return nil, err
    }
    return &Handler{backend}, nil
}

func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    reverseProxy := &httputil.ReverseProxy{
        Director: func(r *http.Request) {
            r.URL.Host = h.backend.Host
            r.URL.Path = "/"
            r.URL.Scheme = h.backend.Scheme
            r.Host = h.backend.Host
            r.Header.Set("X-Forwarded-Host", r.Header.Get("Host"))      
        },
    }
    reverseProxy.ServeHTTP(w, r)
}

Note that I have renamed Forward to ServeHTTP to simplify this example.

2. Use httptest for live handler testing

The next step is to have a basic test:

func TestHandler(t *testing.T) {
    // 1. set-up a backend server
    // 2. set-up a reverse proxy with the handler we are testing
    // 3. call the reverse-proxy
    // 4. check that the backend server received the correct header

}

Let's start by filling in the simple parts:

// set-up a backend server 
backendServer := httptest.NewServer(http.DefaultServeMux)
defer backendServer.Close()

backendURL, err := url.Parse(backendServer.URL)
if err != nil {
    t.Fatal(err)
}

// set-up the reverse proxy
handler := &Handler{backend: backendURL} // <-- here we inject our own endpoint!
reverseProxy := httptest.NewServer(handler)
defer reverseProxy.Close()

reverseProxyURL, err := url.Parse(reverseProxy.URL)
if err != nil {
    t.Fatal(err)
}

// call the reverse proxy
res, err := http.Get(reverseProxy.URL)
if err != nil {
    t.Fatal(err)
}
// todo optional: assert properties of the response
_ = res


// check that the backend server received the correct header
// this comes next...

3. Communicate results from test server to test

Now what we need is a way to communicate the received header to the main test. Since our test servers can use arbitrary handlers, let's extend the set-up of our backend server.

var (
    mu     sync.Mutex
    header string
)

// create a backend server that checks the incoming headers
backendServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    mu.Lock()
    defer mu.Unlock()
    header = r.Header.Get("X-Forwarded-Host")
    w.WriteHeader(http.StatusOK)
}))
defer backendServer.Close()

Note how I'm using a mutex, because the handler will run in a different go-routine. You could also use a channel.

At this point, we can implement our assertion:

mu.Lock()
got := header
mu.Unlock()

// check that the header has been set
want := reverseProxyURL.Host
if got != want {
    t.Errorf("GET %s gives header %s, got %s", reverseProxy.URL, want, got)
}

Note that this will still fail, but this time because your code under test is wrong :-) r.Header.Get("Host") should be replaced by r.Host.

Appendix: full example

package example

import (
    "net/http"
    "net/http/httptest"
    "net/http/httputil"
    "net/url"
    "sync"
    "testing"
)

type Handler struct {
    backend *url.URL
}

func NewHandler() (*Handler, error) {
    backend, err := url.Parse("http://test.com")
    if err != nil {
        return nil, err
    }
    return &Handler{backend}, nil
}

func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    reverseProxy := &httputil.ReverseProxy{
        Director: func(r *http.Request) {
            r.URL.Host = h.backend.Host
            r.URL.Path = "/"
            r.URL.Scheme = h.backend.Scheme
            r.Header.Set("X-Forwarded-Host", r.Host)
            r.Host = h.backend.Host
        },
    }
    reverseProxy.ServeHTTP(w, r)
}

func TestHandler(t *testing.T) {
    var (
        mu     sync.Mutex
        header string
    )

    // create a backend server that checks the incoming headers
    backendServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        mu.Lock()
        defer mu.Unlock()
        header = r.Header.Get("X-Forwarded-Host")
        w.WriteHeader(http.StatusOK)
    }))
    defer backendServer.Close()

    backendURL, err := url.Parse(backendServer.URL)
    if err != nil {
        t.Fatal(err)
    }

    // create a server for your reverse proxy
    handler := &Handler{backend: backendURL}
    reverseProxy := httptest.NewServer(handler)
    defer reverseProxy.Close()

    reverseProxyURL, err := url.Parse(reverseProxy.URL)
    if err != nil {
        t.Fatal(err)
    }

    // make a request to the reverse proxy
    res, err := http.Get(reverseProxy.URL)
    if err != nil {
        t.Fatal(err)
    }
    // todo optional: assert properties of the response
    _ = res

    mu.Lock()
    got := header
    mu.Unlock()

    // check that the header has been set
    want := reverseProxyURL.Host
    if got != want {
        t.Errorf("GET %s gives header %s, got %s", reverseProxy.URL, want, got)
    }
}


Answered By - publysher
Answer Checked By - David Marino (PHPFixing Volunteer)

Sunday, May 15, 2022

[FIXED] How to unit test graphics with python3, CircleCI and Mayavi

 May 15, 2022     circleci, mayavi, python, testing, ubuntu     No comments   

Issue

I wrote a bunch of visualization functions in my Python 3 library using Mayavi. I am not very familiar with this library, nor with testing visualizations in Python.

Ideally, I would just like the visualization code to generate some graphics on disk; I don't care much about pop-up windows (although I'm not sure whether Mayavi can work properly without opening them).

Anyway, my code works locally, but when I push to develop, CircleCI fails to run the tests with the following error:

#!/bin/bash -eo pipefail
python3 tests/test.py

qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.


Received "aborted" signal

The Docker Image I use is the following:

FROM ubuntu:focal

ARG DEBIAN_FRONTEND=noninteractive

RUN apt-get update -y
RUN apt-get install -y --no-install-recommends\
                    vim \
                    git \
                    gcc-9 \
                    g++ \
                    build-essential \
                    libboost-all-dev \
                    cmake \
                    unzip \
                    tar \
                    ca-certificates \
                    doxygen \
                    graphviz
                    
RUN apt-get install -y libgdal-dev g++ --no-install-recommends && \
    apt-get clean -y

ENV CPLUS_INCLUDE_PATH=/usr/include/gdal
ENV C_INCLUDE_PATH=/usr/include/gdal

RUN git clone --recurse-submodules https://github.com/Becheler/quetzal-EGGS \
&& cd quetzal-EGGS \
&&  mkdir Release \
&&  cd Release \
&& cmake .. -DCMAKE_INSTALL_PREFIX="/usr/local/quetzal-EGGS" \
&& cmake --build . --config Release --target install

RUN set -xe \
    apt-get update && apt-get install -y \
    python3-pip \
    --no-install-recommends

RUN pip3 install --upgrade pip
RUN pip3 install build twine pipenv numpy # for crumbs publishing
RUN pip3 install rasterio && \
    pip3 install matplotlib && \
    pip3 install imageio && \
    pip3 install imageio-ffmpeg && \
    pip3 install pyproj && \
    pip3 install shapely && \
    pip3 install fiona && \
    pip3 install scikit-learn && \ 
    pip3 install pyimpute && \ 
    pip3 install geopandas && \
    pip3 install pygbif

########## MAYAVI 

# xcb plugin 
RUN apt-get install -y --no-install-recommends libxkbcommon-x11-0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0 libxcb-render-util0 libxcb-xinerama0 && \
    apt-get clean -y
    
RUN python3 -m pip install vtk
RUN apt-get update && apt-get install -y python3-opencv && apt-get clean -y
RUN pip3 install opencv-python
RUN pip3 install mayavi PyQt5
  
RUN pip3 install GDAL==$(gdal-config --version) pyvolve==1.0.3 quetzal-crumbs==0.0.15

# Clean to make image smaller
RUN apt-get autoclean && \
    apt-get autoremove && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
  • Am I missing some dependencies?
  • Should I add a -X option somewhere?
  • Should I deactivate the mlab.show() and imshow() calls in my library?

Solution

I missed a dependency, qt5-default. I ended up with these lines for running Mayavi on Docker/CircleCI:

########## MAYAVI 

# xcb plugin 
RUN apt-get install -y --no-install-recommends xvfb libxkbcommon-x11-0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0 libxcb-render-util0 libxcb-xinerama0 && \
    apt-get clean -y
# Trying to solve the weird xcfb error
RUN apt-get install -y --no-install-recommends qt5-default && \
    apt-get clean -y
RUN python3 -m pip install vtk
RUN apt-get update && apt-get install -y python3-opencv && apt-get clean -y
RUN pip3 install opencv-python
RUN pip3 install PyVirtualDisplay
RUN pip3 install mayavi PyQt5

I am pretty sure that many of these dependencies are redundant or not required, but it works, so I will leave it this way until I have some time to do a bit of cleanup.

I also added this line to my Python library:

mlab.options.offscreen = True

The rationale for it comes from the Mayavi off-screen rendering documentation.
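Since xvfb is installed in the image above, another option (my suggestion, not something the answer says it ended up needing) is to give the tests a virtual X display in CI by wrapping the test command:

# Run the test suite against a virtual display provided by Xvfb
xvfb-run -a python3 tests/test.py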



Answered By - Arnaud Becheler
Answer Checked By - Timothy Miller (PHPFixing Admin)

Thursday, May 12, 2022

[FIXED] Why doesn't the "assertSelectorExists()" assertion show the failure message that I specify?

 May 12, 2022     phpunit, symfony, symfony-panther, testing     No comments   

Issue

I'm trying to show a custom message (the URL being tested) when a PHPUnit 9.5.11 test fails in my Symfony 4.4 app.

The class is simple:

class BaseTestCase extends PantherTestCase

In my test, I have:

$client = static::createPantherClient();
$crawler = $client->request('GET', $url);
$this->assertSelectorExists('.some-class', $url); // <- this should display the tested $url, since the second argument is supposed to be the message to show on failure

But when the test fails, all I get is:

  1. App\Tests\PropertyListingTest::testListingWithFullQueryString with data set #0 ('') Facebook\WebDriver\Exception\NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":".some-class"} (Session info: headless chrome=97.0.4692.71)

What's going on here? If I run this:

$this->assertEquals("a", "b", "!!!!TEST FAILED!!!!");

It works as expected:

!!!!TEST FAILED!!!! Failed asserting that two strings are equal. --- Expected +++ Actual @@ @@ -'a' +'b'


Solution

That behaviour actually looks like a bug to me.

Because if you look at what assertSelectorExists does internally (see below)...

public static function assertSelectorExists(string $selector, string $message = ''): void
{
    $client = self::getClient();

    if ($client instanceof PantherClient) {
        $element = self::findElement($selector); // NoSuchElementException is thrown here
        self::assertNotNull($element, $message); // so the message is never used
        
        return;
    }

    self::assertNotEmpty($client->getCrawler()->filter($selector));
}

...you'll notice that it expects findElement to return null when no element is found. But the problem is that findElement doesn't return null in that case; it throws an exception instead.

To fix this, I would make my own version of assertSelectorExists like this:

$client = static::createPantherClient();
$crawler = $client->request('GET', $url);
$this->assertSelectorExistsSafe($client, '.some-class', $url);

protected function assertSelectorExistsSafe($client, $selector, $message)
{
    try {
        $element = $client->findElement($client->createWebDriverByFromLocator($selector));
    } catch (\Facebook\WebDriver\Exception\NoSuchElementException $e) {
        $element = null;
    }
    self::assertNotNull($element, $message);
}


Answered By - xtx
Answer Checked By - Senaida (PHPFixing Volunteer)

Thursday, April 28, 2022

[FIXED] How to ignore a warning inside a test using pytest?

 April 28, 2022     decorator, pytest, python, testing, warnings     No comments   

Issue

I am trying to test a function that uses DatetimeFields. The function I want to test is the following:

def get_pledge_frequency(last_week_pledges):
    """Returns two lists:
    pledge_frequency: containing the number of pledges per day of the last week
    weekdays: containing a letter that represents the day
    It assumes that last_week_pledges are pledges made within the last week.
    """
    pledge_frequency = []
    weekdays = []

    if last_week_pledges:
        last_7_days = [timezone.now() - timedelta(days=i) for i in range(7)]
        last_7_days.reverse()
        day_names = 'MTWTFSS'

        for day in last_7_days:
            pledge_frequency.append(
                last_week_pledges.filter(timestamp__date=day).count())
            weekdays.append(day_names[day.weekday()])

    return pledge_frequency, weekdays

I am using pytest for testing, so the test that I have implemented is the following:

pledge_frequency_ids = ['no_pledges', 'one_pledge_today',
                        'one_pledge_not_today', 'two_pledges_same_day',
                        'two_pledges_not_same_day', 'multiple_pledges_a',
                        'multiple_pledges_b']

pledge_data = [
    ('2018-03-30', [], ([], [])),
    ('2018-03-30', ['2018-03-30'], ([0] * 6 + [1], 'SSMTWTF')),
    ('2018-03-30', ['2018-03-27'], ([0, 0, 0, 1, 0, 0, 0], 'SSMTWTF')),
    ('2018-03-31', ['2018-03-29', '2018-03-29'], ([0, 0, 0, 0, 2, 0, 0], 'SMTWTFS')),
    ('2018-03-28', ['2018-03-26', '2018-03-28'], ([0, 0, 0, 0, 1, 0, 1], 'TFSSMTW')),
    ('2018-04-01', ['2018-03-26', '2018-03-26', '2018-03-27', '2018-03-28'], ([2, 1, 1, 0, 0, 0, 0], 'MTWTFSS',)),
    ('2018-03-29', ['2018-03-25', '2018-03-26', '2018-03-27', '2018-03-28'], ([0, 0, 1, 1, 1, 1, 0], 'FSSMTWT'))]

@pytest.mark.parametrize('today, pledge_information, pledge_frequency',
                         pledge_data, ids=pledge_frequency_ids)
@pytest.mark.django_db
@mock.patch('django.utils.timezone.now')
@mock.patch('pledges.models.Pledge')
def test_get_pledge_frequency(_, mock_now, social_user, today,
                              pledge_information, pledge_frequency):
    """Tests to verify correctness of get_pledge_frequency() function.
    Covering the following cases:
    * No pledges
    * One pledge today
    * One pledge not today
    * Two pledges the same day
    * Two pledges not the same day
    * Multiple pledges particular case 0
    * Multiple pledges particular case 1"""
    mock_now.return_value = timezone.datetime.strptime(today, '%Y-%m-%d')
    for pledge_info in pledge_information:
        pledge = Pledge()
        pledge.user = social_user
        pledge.save()
        pledge.timestamp = timezone.datetime.strptime(pledge_info, '%Y-%m-%d')
        pledge.save()

    last_week_pledges = Pledge.objects.all()
    expected_frequency, expected_weekdays = pledge_frequency
    expected_weekdays = list(expected_weekdays)
    actual_frequency, actual_weekdays = get_pledge_frequency(last_week_pledges)

    assert expected_frequency == actual_frequency
    assert expected_weekdays == actual_weekdays

The tests pass, but the problem is that I am getting the following warning:

RuntimeWarning: DateTimeField Pledge.timestamp received a naive datetime (2018-03-29 00:00:00) while time zone support is active.

Actually, I get several RuntimeWarning which notify the use of a naive datetime while time zone support is active.

How can I disable warnings just for this test? I found that using @pytest.mark.filterwarnings might be useful, and I have added the decorator like this: @pytest.mark.filterwarnings('ignore:RuntimeWarning'). However, that didn't work, and after running the test I still got those warnings.

Does it matter where I put the decorator? I have tried several combinations, but it doesn't work yet.

In the documentation I found that I can add addopts = -p no:warnings to my pytest.ini file, but I don't want to follow this approach in case I get another test generating this warning.


Solution

According to the pytest documentation, @pytest.mark.filterwarnings actually is the right approach; the problem was that the parameter I was passing was not correct. The issue was solved by:

@pytest.mark.filterwarnings('ignore::RuntimeWarning') # notice the ::

so the test works as follows:

pledge_frequency_ids = ['no_pledges', 'one_pledge_today',
                        'one_pledge_not_today', 'two_pledges_same_day',
                        'two_pledges_not_same_day', 'multiple_pledges_a',
                        'multiple_pledges_b']

pledge_data = [
    ('2018-03-30', [], ([], [])),
    ('2018-03-30', ['2018-03-30'], ([0] * 6 + [1], 'SSMTWTF')),
    ('2018-03-30', ['2018-03-27'], ([0, 0, 0, 1, 0, 0, 0], 'SSMTWTF')),
    ('2018-03-31', ['2018-03-29', '2018-03-29'], ([0, 0, 0, 0, 2, 0, 0], 'SMTWTFS')),
    ('2018-03-28', ['2018-03-26', '2018-03-28'], ([0, 0, 0, 0, 1, 0, 1], 'TFSSMTW')),
    ('2018-04-01', ['2018-03-26', '2018-03-26', '2018-03-27', '2018-03-28'], ([2, 1, 1, 0, 0, 0, 0], 'MTWTFSS',)),
    ('2018-03-29', ['2018-03-25', '2018-03-26', '2018-03-27', '2018-03-28'], ([0, 0, 1, 1, 1, 1, 0], 'FSSMTWT'))]

@pytest.mark.parametrize('today, pledge_information, pledge_frequency',
                         pledge_data, ids=pledge_frequency_ids)
@pytest.mark.filterwarnings('ignore::RuntimeWarning')
@pytest.mark.django_db
@mock.patch('django.utils.timezone.now')
@mock.patch('pledges.models.Pledge')
def test_get_pledge_frequency(_, mock_now, social_user, today,
                              pledge_information, pledge_frequency):
    """Tests to verify correctness of get_pledge_frequency() function.
    Covering the following cases:
    * No pledges
    * One pledge today
    * One pledge not today
    * Two pledges the same day
    * Two pledges not the same day
    * Multiple pledges particular case 0
    * Multiple pledges particular case 1"""
    mock_now.return_value = timezone.datetime.strptime(today, '%Y-%m-%d')
    for pledge_info in pledge_information:
        pledge = Pledge()
        pledge.user = social_user
        pledge.save()
        pledge.timestamp = timezone.datetime.strptime(pledge_info, '%Y-%m-%d')
        pledge.save()

    last_week_pledges = Pledge.objects.all()
    expected_frequency, expected_weekdays = pledge_frequency
    expected_weekdays = list(expected_weekdays)
    actual_frequency, actual_weekdays = get_pledge_frequency(last_week_pledges)

    assert expected_frequency == actual_frequency
    assert expected_weekdays == actual_weekdays
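If you would rather silence only the naive-datetime warning instead of every RuntimeWarning, the mark also accepts the standard warning-filter syntax with a message prefix (a sketch; the prefix is an assumption based on the warning text above):

@pytest.mark.filterwarnings('ignore:DateTimeField:RuntimeWarning')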


Answered By - lmiguelvargasf
Answer Checked By - Robin (PHPFixing Admin)

Wednesday, April 27, 2022

[FIXED] How to avoid Rspec shared examples 'previously defined' warning?

 April 27, 2022     rspec, ruby, ruby-on-rails, testing, warnings     No comments   

Issue

I am trying to learn how to use Rspec's shared examples feature and am getting a warning when I run my tests:

WARNING: Shared example group 'required attributes' has been previously defined at:
  /Users/me/app/spec/support/shared_examples/required_attributes_spec.rb:1
...and you are now defining it at:
  /Users/me/app/spec/support/shared_examples/required_attributes_spec.rb:1
The new definition will overwrite the original one.
....

I have read what I think is the documentation on this problem here but I'm having trouble understanding it/seeing the takeaways for my case.

Here is my shared example:

# spec/support/shared_examples/required_attributes_spec.rb

shared_examples_for 'required attributes' do |arr|
  arr.each do |meth|
    it "is invalid without #{meth}" do
      subject.send("#{meth}=", nil)
      subject.valid?
      expect(subject.errors[meth]).to eq(["can't be blank"])
    end
  end
end

I am trying to use this in a User model and a Company model. Here is what it looks like:

# spec/models/user_spec.rb

require 'rails_helper'

describe User do
  subject { build(:user) }
  include_examples 'required attributes', [:name]
end

# spec/models/company_spec.rb

require 'rails_helper'

describe Company do
  subject { build(:company) }
  include_examples 'required attributes', [:logo]
end

Per the recommendations in the Rspec docs I linked to above, I have tried changing include_examples to it_behaves_like, but that didn't help. I also commented out company_spec.rb entirely so there was just one spec using the shared example, and I am still getting the warning.

Can anyone help me see what's really going on here and what I should do in this case to avoid the warning?


Solution

I found the answer in this issue at the Rspec Github:

Just in case someone googles and lands here. If putting your file with shared examples into support folder has not fixed the following error...Make sure your filename does not end with _spec.rb.
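In this project that means renaming the file to something like spec/support/shared_examples/required_attributes.rb and making sure the support folder is loaded; the usual line in rails_helper.rb (an assumption about your setup, add or uncomment it if missing) is:

# rails_helper.rb — load everything under spec/support, including shared examples
Dir[Rails.root.join('spec/support/**/*.rb')].sort.each { |f| require f }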



Answered By - sixty4bit
Answer Checked By - David Marino (PHPFixing Volunteer)

Thursday, April 21, 2022

[FIXED] How to use javascript to run automated tests in browser using selenium+CUCUMBER

 April 21, 2022     connection, cucumber, javascript, selenium, testing     No comments   

Issue

Well, I am kinda new to this. First of all, my main goal is to execute a simple Cucumber example which automatically tests something equally simple. By doing this I will try to get an idea of how I should do other kinds of automated tests. So, I wrote some scenarios, and I want to test them somehow on a site (e.g. google.com). The site is written in JS and therefore I need to write JavaScript code to "connect" the scenarios with the language.

I google searched things like: "How to automatically test a site using cucumber" "How to automatically run scenarios with selenium-javascript" and so on...

Any ideas? No hateful comments please :/ Thanks in advance!

DL.


Solution

I wrote some scenarios,

When you say that, I believe you are able to execute your test cases with Cucumber.

The site is written in JS and therefore I need to write JavaScript code to "connect" the scenarios with the language.

That's not necessary. If your site is based on JavaScript (for example AngularJS), you can still use plain Java + Selenium, but Protractor is recommended for this case because it provides a wrapper for it. Protractor is a Node.js-based project for dealing with sites built on AngularJS.

https://www.protractortest.org/#/

How to automatically test a site using cucumber

You can use a CI/CD tool like Jenkins, which you can trigger manually, or you can set up a scheduler that runs all your test scripts against your website. You can also turn on notifications so that whenever the tests complete, an email is sent to the respective individuals.

Refer:

https://jenkins.io/

There are plenty of tutorials covering the same setup, for example:

Click Here



Answered By - Shubham Jain
Answer Checked By - Candace Johnson (PHPFixing Volunteer)

[FIXED] Why PHPUnit tests run faster when the machine is disconnected from the internet?

 April 21, 2022     connection, performance, php, phpunit, testing     No comments   

Issue

I have noticed that when my laptop is connected to the internet, my PHPUnit tests take between ~90 and ~200 seconds to finish. But when I disconnect it from the internet, they run in less than 20 seconds!! That makes me happy and sad at the same time!

In both cases all the tests are passing, I'm sure I'm mocking every request to external API's.

I'm using Laravel and MySQL for real data storage and in-memory SQLite for the test environment. Also, my development environment all runs on Docker.

Is this something related to PHPUnit or to my code? Does anyone have an idea of what's going on? Thanks.

More Info

The domain I'm using is something.dev and my APIs use api.something.dev. Every test makes at least one call to each API endpoint.

DNS! If you think this is due to DNS lookups: I changed the domain and all subdomains to 127.0.0.1 just to test it, and it didn't help; the tests are still slow. Shouldn't this eliminate the possibility of DNS lookups?

In addition, I tried mocking the DNS using the PHPUnit Bridge with PHPUnit, but I guess I couldn't make it work due to the lack of documentation, so I didn't know what to pass as a parameter to DnsMock::withMockedHosts([here!!]) after calling it from my setUp() function.

Something else: I think the problem is related to the data storage, because the delay happens before and after querying the database, mostly when storing data.


Solution

Wow, that wasn't expected. It turns out my tests are slow because of the image() function provided by the PHP Faker package: $faker->image().

I was using it in one of my factories to prepare a fake image for the DB. I didn't know it literally downloads images and stores them in a folder like /private/var/folders/51/5ybn3kjn8f332jfrsx7nmam00000gn/T/.

I found this by monitoring what the PHP process was doing while the tests ran; it had an open .jpg file in that directory, so I searched my code for anything image-related and discovered this after about 6 hours of debugging. Happy coding :)
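If you still want a fake image reference in the factory without the network hit (my suggestion, not part of the answer), Faker's imageUrl() only builds a placeholder URL string instead of downloading a file; the column name below is hypothetical:

// In the factory: a URL string instead of an actual downloaded file
'photo' => $faker->imageUrl(640, 480),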



Answered By - Mahmoud Zalt
Answer Checked By - Pedro (PHPFixing Volunteer)