Francois Normandin

Members
  • Posts

    27
  • Joined

  • Last visited

Profile Information

  • Gender
    Male

  1. @felipefoz is aware of this, but for other folks out there, this is set to be part of Caraya 1.2 when released. https://github.com/JKISoftware/Caraya/issues/121
  2. @Mathilde (I just saw this thread today) Caraya is not well suited for automated code coverage calculation because it executes at the VI level, not the application level. No code inspection is performed, and the framework is geared entirely toward individual assertions. An application could be developed on top of Caraya to inspect code and report the minimum number of assertion tests needed to cover all conditions, but the result would probably still be open to interpretation, since different assertions can be run on the same case to test all the limit conditions.

     As an example, consider the number of assertions needed to fully test a single "enum" value in your screenshot. Just from the image you provide, I can think of these assertions:
     • "Numeric 2" is equal to 0
     • "Numeric 1" is the same sign as "Numeric 2" and not equal to 0
     • "Numeric 1" is the opposite sign from "Numeric 2" and not equal to 0
     • "Numeric 1" is equal to NaN
     • "Numeric 2" is equal to NaN

     Those five assertions are possibly just a subset of the range of assertions to run, but they are by no means equivalent to achieving 83.3% coverage based on the computation "5 tests for 6 different paths". Indeed, the same limits should be tested for all 6 paths, which means 30 assertions are the baseline for achieving 100% coverage. But this really depends on the algorithm in the black box (case structures): there is no way to determine the number of assertions needed for full coverage unless one knows the algorithm under test, or unless the assertions happen directly in each case of the production code... Each generated test vector would need to be tested against multiple assertion results.

     If Caraya could aggregate more information that would be useful to such a wrapper application, that could certainly be entertained. At the moment, the call chain, assert name, and test names are pretty much all that is available.
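     The coverage arithmetic above can be sketched in Python. This is a hypothetical illustration, not Caraya code: the 5 boundary assertions are the ones listed in the post, and the 6 paths are assumed from the screenshot's enum cases.

     ```python
     # Coverage arithmetic from the post above (illustration only).
     # 5 boundary assertions were listed; the enum is assumed to have 6 paths.
     assertions_listed = 5
     paths = 6

     # Counting the 5 assertions against 6 paths gives the naive figure:
     naive_coverage = assertions_listed / paths            # 5/6, i.e. about 83.3%

     # Repeating the same boundary checks on every path gives the real baseline:
     full_coverage_assertions = assertions_listed * paths  # 30 assertions

     print(f"naive coverage: {naive_coverage:.1%}")        # naive coverage: 83.3%
     print(f"full baseline:  {full_coverage_assertions} assertions")
     ```

     The point of the sketch is that the naive ratio undercounts the work: full coverage scales with the number of paths, not with the number of distinct limit conditions.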
  3. I'd love to be able to filter the vipm.io content with labels such as "open source" to list only the projects that have a public repository I can fork and/or contribute to. This would filter out any package that does not accept contributions. Free does not mean open. I know that "tags" could be useful filters, but tags do not mean the information is verified. If multiple packages provide solutions for similar functionality, I'd like to be able to filter the list down further. On the other hand, some users might want to filter out the open source projects in favor of commercially-supported packages... Useful filters could be "open-source", "free", "free-trial", "driver", "commercially-supported", "alliance-partner", "NI", etc. Those filters should be set automatically through the package publishing process, not through tagging.
  4. @turbophil The reason we exclude Test Suites from the list of scanned VIs is that we need to create a Test Suite when finding tests programmatically (from the CLI or through VI Server calls). Since the Caraya 1.x architecture does not support nested Test Suites, we have to exclude them from the list, otherwise we'd throw errors. It is not a fundamental choice, but rather a legacy decision to preserve backward compatibility when we upgraded from 0.6 to 1.x. The feature, I think, requires a modification of the Test Suite architecture, one with a potentially rather large impact on the amount of testing we would need to do to ensure we don't break existing tests and workflows, so it was put aside for the moment. (This is a personal opinion, but as the main developer for this project, I'd rather first decouple the Test Manager (test engine) from the Caraya UI before adding support for nested Test Suites. That would diminish the risk of breaking existing code and be much less of a worry.) I think Jim's suggestion is currently the best workaround: a top-level Test that ensures your tests run in parallel as you intend them to. It could perhaps be an interesting feature request to support a node that would find a Test Suite and run it programmatically, without wrapping it in another Test Suite. If you're interested in making this suggestion on GitHub, I think it could gain traction quickly: https://github.com/JKISoftware/Caraya
  5. Sure. Here is a sample project in LV2013 SP1, with a build deployed in LV2017, that reproduces the issue. The package installs the example under the "vi.lib/testing" folder. Attachments: testing_lib_rtm_bugtest-1.0.0.1.vip, VIPM RTM LVLIB Bug.zip
  6. This bug affects VIPM 2019 (not tested with earlier versions).
     Problem: A .mnu file located inside a library does not relink when the library is built into a package.
     How to reproduce: In Caraya.lvlib, there is an "Application Menu.rtm" menu file that defines the menu for the interactive UI of the Basic Test Manager class. After building the package, the menu file's URL does not relink properly in Caraya.lvlib.
     Before build (from source Caraya.lvlib): [screenshot]
     After installed package (from toolkit location of Caraya.lvlib): [screenshot]
     Current workaround: Extract the .rtm file from the library.
  7. Cool indeed. Chris, you seem happier than I am... thanks Jim. [Long shot] What about VIPM 2.x? [/Long shot]
  8. Is there any way, in the meantime, to do it from a command line?